Test Report: Hyper-V_Windows 18774

9d63d58ff18723161685b0b8e892cfd1b7c2a23e:2024-04-29:34260

Tests failed (21/198)

TestAddons/parallel/Registry (72.19s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 22.1431ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-894w8" [75b2dc4f-95c0-4239-b5fb-3b21d4a53327] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.020189s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-t4ccx" [25144203-305f-40fb-9c65-8c59773521bc] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0198907s
addons_test.go:340: (dbg) Run:  kubectl --context addons-442400 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-442400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-442400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.5688117s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 ip: (2.7358618s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0429 18:48:26.123998    1664 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-442400 ip"
2024/04/29 18:48:28 [DEBUG] GET http://172.17.248.23:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 addons disable registry --alsologtostderr -v=1: (15.9778068s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-442400 -n addons-442400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-442400 -n addons-442400: (13.2109157s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 logs -n 25: (9.4211141s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-029800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | -p download-only-029800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| delete  | -p download-only-029800                                                                     | download-only-029800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| start   | -o=json --download-only                                                                     | download-only-657800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | -p download-only-657800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:41 UTC | 29 Apr 24 18:41 UTC |
	| delete  | -p download-only-657800                                                                     | download-only-657800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:41 UTC | 29 Apr 24 18:41 UTC |
	| delete  | -p download-only-029800                                                                     | download-only-029800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:41 UTC | 29 Apr 24 18:41 UTC |
	| delete  | -p download-only-657800                                                                     | download-only-657800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:41 UTC | 29 Apr 24 18:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-491300 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:41 UTC |                     |
	|         | binary-mirror-491300                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:52224                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-491300                                                                     | binary-mirror-491300 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:41 UTC | 29 Apr 24 18:41 UTC |
	| addons  | enable dashboard -p                                                                         | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:41 UTC |                     |
	|         | addons-442400                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:41 UTC |                     |
	|         | addons-442400                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-442400 --wait=true                                                                | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:41 UTC | 29 Apr 24 18:48 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-442400 addons                                                                        | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:48 UTC | 29 Apr 24 18:48 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-442400 ssh cat                                                                       | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:48 UTC | 29 Apr 24 18:48 UTC |
	|         | /opt/local-path-provisioner/pvc-615aeca5-4422-4969-87de-5534dc276d28_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-442400 ip                                                                            | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:48 UTC | 29 Apr 24 18:48 UTC |
	| addons  | addons-442400 addons disable                                                                | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:48 UTC | 29 Apr 24 18:48 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-442400 addons disable                                                                | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:48 UTC | 29 Apr 24 18:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-442400 addons disable                                                                | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:48 UTC |                     |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-442400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:48 UTC |                     |
	|         | addons-442400                                                                               |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 18:41:24
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 18:41:24.364362    5088 out.go:291] Setting OutFile to fd 308 ...
	I0429 18:41:24.364694    5088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:41:24.364694    5088 out.go:304] Setting ErrFile to fd 620...
	I0429 18:41:24.364694    5088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:41:24.389842    5088 out.go:298] Setting JSON to false
	I0429 18:41:24.395013    5088 start.go:129] hostinfo: {"hostname":"minikube6","uptime":18023,"bootTime":1714398060,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 18:41:24.395222    5088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 18:41:24.401242    5088 out.go:177] * [addons-442400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 18:41:24.405227    5088 notify.go:220] Checking for updates...
	I0429 18:41:24.407127    5088 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 18:41:24.410616    5088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 18:41:24.413226    5088 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 18:41:24.415844    5088 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 18:41:24.418445    5088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 18:41:24.421852    5088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:41:30.038913    5088 out.go:177] * Using the hyperv driver based on user configuration
	I0429 18:41:30.047705    5088 start.go:297] selected driver: hyperv
	I0429 18:41:30.047830    5088 start.go:901] validating driver "hyperv" against <nil>
	I0429 18:41:30.047830    5088 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 18:41:30.105403    5088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 18:41:30.107061    5088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 18:41:30.107061    5088 cni.go:84] Creating CNI manager for ""
	I0429 18:41:30.107061    5088 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 18:41:30.107061    5088 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 18:41:30.107061    5088 start.go:340] cluster config:
	{Name:addons-442400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-442400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:41:30.107061    5088 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:41:30.116729    5088 out.go:177] * Starting "addons-442400" primary control-plane node in "addons-442400" cluster
	I0429 18:41:30.121825    5088 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 18:41:30.122010    5088 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 18:41:30.122010    5088 cache.go:56] Caching tarball of preloaded images
	I0429 18:41:30.122400    5088 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 18:41:30.122400    5088 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 18:41:30.122819    5088 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\config.json ...
	I0429 18:41:30.122819    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\config.json: {Name:mk6ac6cc29019ea28f2bcdae2aa6465c09a289c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:41:30.124570    5088 start.go:360] acquireMachinesLock for addons-442400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 18:41:30.124570    5088 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-442400"
	I0429 18:41:30.125064    5088 start.go:93] Provisioning new machine with config: &{Name:addons-442400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:addons-442400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 18:41:30.125064    5088 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 18:41:30.128790    5088 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0429 18:41:30.128790    5088 start.go:159] libmachine.API.Create for "addons-442400" (driver="hyperv")
	I0429 18:41:30.128790    5088 client.go:168] LocalClient.Create starting
	I0429 18:41:30.129908    5088 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 18:41:30.205484    5088 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 18:41:30.390518    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 18:41:32.850599    5088 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 18:41:32.850599    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:32.850599    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 18:41:34.676671    5088 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 18:41:34.676671    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:34.676978    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 18:41:36.205940    5088 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 18:41:36.205940    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:36.205940    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 18:41:40.111239    5088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 18:41:40.111239    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:40.114248    5088 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 18:41:40.640872    5088 main.go:141] libmachine: Creating SSH key...
	I0429 18:41:40.799540    5088 main.go:141] libmachine: Creating VM...
	I0429 18:41:40.799540    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 18:41:43.680456    5088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 18:41:43.681083    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:43.681148    5088 main.go:141] libmachine: Using switch "Default Switch"
	I0429 18:41:43.681148    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 18:41:45.541159    5088 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 18:41:45.541258    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:45.541298    5088 main.go:141] libmachine: Creating VHD
	I0429 18:41:45.541298    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 18:41:49.314142    5088 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FFA1D664-4CAE-497E-AE3E-E53CEBF4DB79
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 18:41:49.314142    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:49.314142    5088 main.go:141] libmachine: Writing magic tar header
	I0429 18:41:49.314637    5088 main.go:141] libmachine: Writing SSH key tar header
	I0429 18:41:49.326117    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 18:41:52.544864    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:41:52.544864    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:52.544967    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\disk.vhd' -SizeBytes 20000MB
	I0429 18:41:55.073656    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:41:55.073656    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:55.073957    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-442400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0429 18:41:58.940678    5088 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-442400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 18:41:58.941257    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:41:58.941257    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-442400 -DynamicMemoryEnabled $false
	I0429 18:42:01.203700    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:42:01.204442    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:01.204442    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-442400 -Count 2
	I0429 18:42:03.418234    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:42:03.418701    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:03.418701    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-442400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\boot2docker.iso'
	I0429 18:42:06.073487    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:42:06.073749    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:06.073749    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-442400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\disk.vhd'
	I0429 18:42:08.754930    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:42:08.755674    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:08.755674    5088 main.go:141] libmachine: Starting VM...
	I0429 18:42:08.755797    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-442400
	I0429 18:42:11.898063    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:42:11.898063    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:11.898063    5088 main.go:141] libmachine: Waiting for host to start...
	I0429 18:42:11.898063    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:14.167975    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:14.167975    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:14.167975    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:42:16.668650    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:42:16.669582    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:17.673577    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:19.909856    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:19.910820    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:19.910820    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:42:22.482096    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:42:22.482096    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:23.495287    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:25.695116    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:25.695116    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:25.695917    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:42:28.291407    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:42:28.291407    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:29.300095    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:31.561270    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:31.561270    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:31.561823    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:42:34.135187    5088 main.go:141] libmachine: [stdout =====>] : 
	I0429 18:42:34.135388    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:35.150961    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:37.373424    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:37.373424    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:37.373633    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:42:40.066419    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:42:40.067232    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:40.067323    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:42.230624    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:42.230624    5088 main.go:141] libmachine: [stderr =====>] : 
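	The repeated `Get-VM ... .state` / `ipaddresses[0]` queries above are the driver's wait loop: it polls until the guest's network adapter reports an address, which stays empty for the first few rounds while DHCP completes. A minimal sketch of that poll-until-nonempty pattern (the inline `if` stands in for the real PowerShell query, and the values are illustrative, taken from this log):

```shell
# Poll a probe until it yields a non-empty value, bounded by a retry cap.
# The "probe" here is simulated; the real driver shells out to
# (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0] and
# sleeps ~1s between attempts.
ip=""; tries=0
while [ -z "$ip" ] && [ "$tries" -lt 5 ]; do
  tries=$((tries + 1))
  # Simulated probe: no address until the 3rd attempt, like the log above.
  if [ "$tries" -ge 3 ]; then ip="172.17.248.23"; fi
done
echo "got IP: $ip after $tries tries"
```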
	I0429 18:42:42.230624    5088 machine.go:94] provisionDockerMachine start ...
	I0429 18:42:42.230624    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:44.433917    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:44.433917    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:44.434630    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:42:47.090473    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:42:47.090473    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:47.097733    5088 main.go:141] libmachine: Using SSH client type: native
	I0429 18:42:47.107981    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.23 22 <nil> <nil>}
	I0429 18:42:47.107981    5088 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 18:42:47.248307    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 18:42:47.248307    5088 buildroot.go:166] provisioning hostname "addons-442400"
	I0429 18:42:47.248307    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:49.435516    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:49.436236    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:49.436381    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:42:52.047129    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:42:52.047129    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:52.055766    5088 main.go:141] libmachine: Using SSH client type: native
	I0429 18:42:52.056505    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.23 22 <nil> <nil>}
	I0429 18:42:52.056505    5088 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-442400 && echo "addons-442400" | sudo tee /etc/hostname
	I0429 18:42:52.235493    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-442400
	
	I0429 18:42:52.235654    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:54.395824    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:54.396507    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:54.396507    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:42:57.031814    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:42:57.032241    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:57.039001    5088 main.go:141] libmachine: Using SSH client type: native
	I0429 18:42:57.039221    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.23 22 <nil> <nil>}
	I0429 18:42:57.039221    5088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-442400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-442400/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-442400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 18:42:57.197511    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
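	The SSH script above makes the `/etc/hosts` update idempotent: skip if the hostname is already present, rewrite an existing `127.0.1.1` line if there is one, otherwise append. A standalone re-creation of that logic, operating on a temp file so it runs without sudo (`NAME` and the seed contents are placeholders; uses POSIX `[[:space:]]` in place of `\s`):

```shell
# Idempotent hosts-entry update, as in the provisioning script above.
NAME=addons-442400
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An old 127.0.1.1 entry exists: rewrite it in place (GNU sed -i).
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
result=$(grep '^127\.0\.1\.1' "$HOSTS")
echo "$result"
rm -f "$HOSTS"
```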
	I0429 18:42:57.197511    5088 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 18:42:57.198464    5088 buildroot.go:174] setting up certificates
	I0429 18:42:57.198464    5088 provision.go:84] configureAuth start
	I0429 18:42:57.198464    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:42:59.382039    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:42:59.382039    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:42:59.382039    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:02.002280    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:02.002280    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:02.002280    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:04.141715    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:04.141715    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:04.141817    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:06.766464    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:06.766464    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:06.767628    5088 provision.go:143] copyHostCerts
	I0429 18:43:06.768153    5088 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 18:43:06.769679    5088 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 18:43:06.771330    5088 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 18:43:06.772879    5088 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-442400 san=[127.0.0.1 172.17.248.23 addons-442400 localhost minikube]
	I0429 18:43:07.036097    5088 provision.go:177] copyRemoteCerts
	I0429 18:43:07.050095    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 18:43:07.050095    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:09.206576    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:09.206778    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:09.206847    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:11.807017    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:11.807017    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:11.807813    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:43:11.920936    5088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8708051s)
	I0429 18:43:11.921767    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 18:43:11.969320    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 18:43:12.020780    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 18:43:12.073461    5088 provision.go:87] duration metric: took 14.8748871s to configureAuth
	I0429 18:43:12.073461    5088 buildroot.go:189] setting minikube options for container-runtime
	I0429 18:43:12.074364    5088 config.go:182] Loaded profile config "addons-442400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 18:43:12.074364    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:14.182803    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:14.183003    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:14.183003    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:16.778197    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:16.778880    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:16.788374    5088 main.go:141] libmachine: Using SSH client type: native
	I0429 18:43:16.788374    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.23 22 <nil> <nil>}
	I0429 18:43:16.788912    5088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 18:43:16.937141    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 18:43:16.937141    5088 buildroot.go:70] root file system type: tmpfs
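	The root-filesystem probe above asks `df` for just the fstype of `/`; on the buildroot guest it prints `tmpfs`, which tells the provisioner which docker unit path to use. The same probe, runnable anywhere GNU coreutils is present (the result will differ on a typical host, so only the command shape is asserted):

```shell
# Print only the filesystem type of the root mount, as the driver does.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs type: $fstype"
```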
	I0429 18:43:16.937141    5088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 18:43:16.937671    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:19.085201    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:19.085861    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:19.085861    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:21.703649    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:21.704370    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:21.711058    5088 main.go:141] libmachine: Using SSH client type: native
	I0429 18:43:21.711678    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.23 22 <nil> <nil>}
	I0429 18:43:21.711678    5088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 18:43:21.879311    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 18:43:21.879502    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:24.065471    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:24.065531    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:24.065531    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:26.645056    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:26.645167    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:26.653990    5088 main.go:141] libmachine: Using SSH client type: native
	I0429 18:43:26.654960    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.23 22 <nil> <nil>}
	I0429 18:43:26.654960    5088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 18:43:28.864034    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 18:43:28.864034    5088 machine.go:97] duration metric: took 46.6330649s to provisionDockerMachine
	I0429 18:43:28.864206    5088 client.go:171] duration metric: took 1m58.7345268s to LocalClient.Create
	I0429 18:43:28.864206    5088 start.go:167] duration metric: took 1m58.7345268s to libmachine.API.Create "addons-442400"
	I0429 18:43:28.864206    5088 start.go:293] postStartSetup for "addons-442400" (driver="hyperv")
	I0429 18:43:28.864206    5088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 18:43:28.878826    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 18:43:28.878826    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:31.046813    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:31.046813    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:31.047518    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:33.664157    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:33.664783    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:33.665172    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:43:33.770050    5088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8911879s)
	I0429 18:43:33.784586    5088 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 18:43:33.793324    5088 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 18:43:33.793324    5088 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 18:43:33.793990    5088 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 18:43:33.794384    5088 start.go:296] duration metric: took 4.9301416s for postStartSetup
	I0429 18:43:33.796616    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:35.962923    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:35.962923    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:35.963782    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:38.572555    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:38.572555    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:38.572555    5088 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\config.json ...
	I0429 18:43:38.576649    5088 start.go:128] duration metric: took 2m8.4505852s to createHost
	I0429 18:43:38.576797    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:40.755491    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:40.755491    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:40.755491    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:43.336015    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:43.336859    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:43.343354    5088 main.go:141] libmachine: Using SSH client type: native
	I0429 18:43:43.344155    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.23 22 <nil> <nil>}
	I0429 18:43:43.344272    5088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 18:43:43.490084    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714416223.503830934
	
	I0429 18:43:43.490226    5088 fix.go:216] guest clock: 1714416223.503830934
	I0429 18:43:43.490226    5088 fix.go:229] Guest: 2024-04-29 18:43:43.503830934 +0000 UTC Remote: 2024-04-29 18:43:38.5767314 +0000 UTC m=+134.420137701 (delta=4.927099534s)
	I0429 18:43:43.490403    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:45.628558    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:45.628558    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:45.628558    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:48.276929    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:48.276929    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:48.283887    5088 main.go:141] libmachine: Using SSH client type: native
	I0429 18:43:48.284787    5088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.23 22 <nil> <nil>}
	I0429 18:43:48.284787    5088 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714416223
	I0429 18:43:48.439093    5088 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 18:43:43 UTC 2024
	
	I0429 18:43:48.439212    5088 fix.go:236] clock set: Mon Apr 29 18:43:43 UTC 2024
	 (err=<nil>)
	I0429 18:43:48.439212    5088 start.go:83] releasing machines lock for "addons-442400", held for 2m18.313297s
	I0429 18:43:48.439500    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:50.596506    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:50.596506    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:50.597352    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:53.215717    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:53.216511    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:53.220880    5088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 18:43:53.221002    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:53.233235    5088 ssh_runner.go:195] Run: cat /version.json
	I0429 18:43:53.233235    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:43:55.451435    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:55.451435    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:55.451435    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:43:55.451435    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:55.451647    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:55.451647    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:43:58.109974    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:58.110478    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:58.111229    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:43:58.140749    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:43:58.141441    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:43:58.142189    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:43:58.271306    5088 ssh_runner.go:235] Completed: cat /version.json: (5.038033s)
	I0429 18:43:58.271306    5088 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0503879s)
	I0429 18:43:58.285497    5088 ssh_runner.go:195] Run: systemctl --version
	I0429 18:43:58.309684    5088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 18:43:58.319752    5088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 18:43:58.338623    5088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 18:43:58.371942    5088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 18:43:58.372071    5088 start.go:494] detecting cgroup driver to use...
	I0429 18:43:58.372485    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 18:43:58.424641    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 18:43:58.461915    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 18:43:58.483054    5088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 18:43:58.497359    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 18:43:58.537599    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 18:43:58.573538    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 18:43:58.610984    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 18:43:58.646415    5088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 18:43:58.682182    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 18:43:58.718754    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 18:43:58.754200    5088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 18:43:58.794086    5088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 18:43:58.833051    5088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 18:43:58.869241    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:43:59.105180    5088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 18:43:59.145345    5088 start.go:494] detecting cgroup driver to use...
	I0429 18:43:59.161178    5088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 18:43:59.209498    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 18:43:59.256570    5088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 18:43:59.310129    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 18:43:59.353852    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 18:43:59.393446    5088 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 18:43:59.460388    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 18:43:59.490658    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 18:43:59.548198    5088 ssh_runner.go:195] Run: which cri-dockerd
	I0429 18:43:59.568874    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 18:43:59.591118    5088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 18:43:59.646635    5088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 18:43:59.914545    5088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 18:44:00.125573    5088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 18:44:00.125918    5088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 18:44:00.177747    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:44:00.402989    5088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 18:44:02.966397    5088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5633894s)
	I0429 18:44:02.980676    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 18:44:03.024680    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 18:44:03.066192    5088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 18:44:03.292872    5088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 18:44:03.502878    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:44:03.707924    5088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 18:44:03.755324    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 18:44:03.803959    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:44:04.030827    5088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 18:44:04.167863    5088 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 18:44:04.184095    5088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 18:44:04.193117    5088 start.go:562] Will wait 60s for crictl version
	I0429 18:44:04.204662    5088 ssh_runner.go:195] Run: which crictl
	I0429 18:44:04.233404    5088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 18:44:04.294269    5088 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 18:44:04.305589    5088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 18:44:04.348775    5088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 18:44:04.381371    5088 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 18:44:04.381653    5088 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 18:44:04.386981    5088 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 18:44:04.387039    5088 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 18:44:04.387092    5088 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 18:44:04.387092    5088 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 18:44:04.390612    5088 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 18:44:04.390612    5088 ip.go:210] interface addr: 172.17.240.1/20
	I0429 18:44:04.403962    5088 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 18:44:04.411534    5088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 18:44:04.435144    5088 kubeadm.go:877] updating cluster {Name:addons-442400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.0 ClusterName:addons-442400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.248.23 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 18:44:04.435144    5088 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 18:44:04.446329    5088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 18:44:04.469264    5088 docker.go:685] Got preloaded images: 
	I0429 18:44:04.469264    5088 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 18:44:04.483749    5088 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 18:44:04.517249    5088 ssh_runner.go:195] Run: which lz4
	I0429 18:44:04.538505    5088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 18:44:04.544350    5088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 18:44:04.544350    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 18:44:06.434850    5088 docker.go:649] duration metric: took 1.9104184s to copy over tarball
	I0429 18:44:06.448713    5088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 18:44:11.802872    5088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.3541196s)
	I0429 18:44:11.802987    5088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 18:44:11.869778    5088 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 18:44:11.890324    5088 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 18:44:11.947159    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:44:12.174237    5088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 18:44:17.797740    5088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6234624s)
	I0429 18:44:17.810321    5088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 18:44:17.835801    5088 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 18:44:17.835941    5088 cache_images.go:84] Images are preloaded, skipping loading
	I0429 18:44:17.835941    5088 kubeadm.go:928] updating node { 172.17.248.23 8443 v1.30.0 docker true true} ...
	I0429 18:44:17.835941    5088 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-442400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.248.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-442400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 18:44:17.847154    5088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 18:44:17.886972    5088 cni.go:84] Creating CNI manager for ""
	I0429 18:44:17.887025    5088 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 18:44:17.887085    5088 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 18:44:17.887142    5088 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.248.23 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-442400 NodeName:addons-442400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.248.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.248.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 18:44:17.887406    5088 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.248.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-442400"
	  kubeletExtraArgs:
	    node-ip: 172.17.248.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.248.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 18:44:17.902189    5088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 18:44:17.922837    5088 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 18:44:17.937197    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 18:44:17.958473    5088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0429 18:44:17.994560    5088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 18:44:18.029677    5088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0429 18:44:18.077565    5088 ssh_runner.go:195] Run: grep 172.17.248.23	control-plane.minikube.internal$ /etc/hosts
	I0429 18:44:18.083938    5088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.248.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 18:44:18.121242    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:44:18.344154    5088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 18:44:18.376143    5088 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400 for IP: 172.17.248.23
	I0429 18:44:18.376143    5088 certs.go:194] generating shared ca certs ...
	I0429 18:44:18.376143    5088 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:18.376864    5088 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 18:44:18.575592    5088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0429 18:44:18.575592    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:18.577481    5088 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0429 18:44:18.577481    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:18.578483    5088 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 18:44:18.979798    5088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0429 18:44:18.979798    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:18.980350    5088 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0429 18:44:18.981353    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:18.982356    5088 certs.go:256] generating profile certs ...
	I0429 18:44:18.982571    5088 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.key
	I0429 18:44:18.982571    5088 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt with IP's: []
	I0429 18:44:19.111256    5088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt ...
	I0429 18:44:19.111256    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: {Name:mk08c8f4b3e217d345d7601e3878cf4ae2580086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:19.113249    5088 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.key ...
	I0429 18:44:19.113249    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.key: {Name:mkad4dd439cc314249fdfe42159686150f32a1ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:19.113493    5088 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.key.99d34da9
	I0429 18:44:19.114478    5088 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.crt.99d34da9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.248.23]
	I0429 18:44:19.353239    5088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.crt.99d34da9 ...
	I0429 18:44:19.353239    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.crt.99d34da9: {Name:mkc1440a8be4b0d6ff25c3c4e7c49decfa762749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:19.353851    5088 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.key.99d34da9 ...
	I0429 18:44:19.353851    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.key.99d34da9: {Name:mk87c8ce620437e1ffb16841b004c258f7687dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:19.355382    5088 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.crt.99d34da9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.crt
	I0429 18:44:19.366368    5088 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.key.99d34da9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.key
	I0429 18:44:19.368689    5088 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\proxy-client.key
	I0429 18:44:19.368689    5088 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\proxy-client.crt with IP's: []
	I0429 18:44:19.810549    5088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\proxy-client.crt ...
	I0429 18:44:19.810549    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\proxy-client.crt: {Name:mk3b2ddc47697e936c2d2730c236b2af5d1abad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:19.811940    5088 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\proxy-client.key ...
	I0429 18:44:19.811940    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\proxy-client.key: {Name:mk807da4c736f8837bacf7be4d4b1610a6056a25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:19.823931    5088 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 18:44:19.823931    5088 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 18:44:19.824940    5088 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 18:44:19.824940    5088 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 18:44:19.826942    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 18:44:19.882249    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 18:44:19.930653    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 18:44:19.984776    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 18:44:20.035493    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 18:44:20.080499    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 18:44:20.133081    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 18:44:20.187783    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 18:44:20.238438    5088 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 18:44:20.297192    5088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 18:44:20.360551    5088 ssh_runner.go:195] Run: openssl version
	I0429 18:44:20.386838    5088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 18:44:20.423224    5088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:44:20.431549    5088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:44:20.446194    5088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:44:20.470540    5088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
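	The two steps above follow OpenSSL's c_rehash convention: the CA is placed under /usr/share/ca-certificates, then linked into /etc/ssl/certs under its subject hash (here b5213941.0, computed by the preceding `openssl x509 -hash -noout` run). A minimal sketch of the same test-then-link logic, run against temp directories instead of /etc, with "b5213941" standing in for a real subject hash:

```shell
#!/bin/sh
# Sketch of minikube's CA-installation logic. Temp dirs stand in for
# /usr/share/ca-certificates and /etc/ssl/certs; the hash name is a
# placeholder (the real one comes from `openssl x509 -hash -noout`).
set -eu

certs_src=$(mktemp -d)   # stands in for /usr/share/ca-certificates
certs_dst=$(mktemp -d)   # stands in for /etc/ssl/certs

printf 'dummy PEM contents\n' > "$certs_src/minikubeCA.pem"

# Step 1: link only if the source cert exists and is non-empty.
test -s "$certs_src/minikubeCA.pem" && \
    ln -fs "$certs_src/minikubeCA.pem" "$certs_dst/minikubeCA.pem"

# Step 2: create the hash-named symlink unless one is already there.
test -L "$certs_dst/b5213941.0" || \
    ln -fs "$certs_dst/minikubeCA.pem" "$certs_dst/b5213941.0"

readlink "$certs_dst/b5213941.0"
```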
	I0429 18:44:20.506033    5088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 18:44:20.515427    5088 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 18:44:20.518940    5088 kubeadm.go:391] StartCluster: {Name:addons-442400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:addons-442400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.248.23 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:44:20.530050    5088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 18:44:20.573222    5088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 18:44:20.608719    5088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 18:44:20.642093    5088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 18:44:20.661124    5088 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 18:44:20.661124    5088 kubeadm.go:156] found existing configuration files:
	
	I0429 18:44:20.676236    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 18:44:20.696499    5088 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 18:44:20.710871    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 18:44:20.743165    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 18:44:20.759777    5088 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 18:44:20.773446    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 18:44:20.807634    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 18:44:20.828642    5088 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 18:44:20.841992    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 18:44:20.877326    5088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 18:44:20.896677    5088 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 18:44:20.910821    5088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 18:44:20.932095    5088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 18:44:21.189189    5088 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 18:44:35.072905    5088 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 18:44:35.072994    5088 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 18:44:35.073245    5088 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 18:44:35.073416    5088 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 18:44:35.073645    5088 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0429 18:44:35.073645    5088 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 18:44:35.076810    5088 out.go:204]   - Generating certificates and keys ...
	I0429 18:44:35.077138    5088 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 18:44:35.077287    5088 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 18:44:35.077287    5088 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 18:44:35.077545    5088 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 18:44:35.077716    5088 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 18:44:35.077917    5088 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 18:44:35.078099    5088 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 18:44:35.078379    5088 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-442400 localhost] and IPs [172.17.248.23 127.0.0.1 ::1]
	I0429 18:44:35.078518    5088 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 18:44:35.078861    5088 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-442400 localhost] and IPs [172.17.248.23 127.0.0.1 ::1]
	I0429 18:44:35.078861    5088 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 18:44:35.078861    5088 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 18:44:35.078861    5088 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 18:44:35.079394    5088 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 18:44:35.079588    5088 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 18:44:35.079760    5088 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 18:44:35.079834    5088 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 18:44:35.080034    5088 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 18:44:35.080133    5088 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 18:44:35.080317    5088 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 18:44:35.080317    5088 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 18:44:35.083226    5088 out.go:204]   - Booting up control plane ...
	I0429 18:44:35.083226    5088 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 18:44:35.083226    5088 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 18:44:35.083814    5088 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 18:44:35.083814    5088 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 18:44:35.084452    5088 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 18:44:35.084452    5088 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 18:44:35.084783    5088 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 18:44:35.085012    5088 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 18:44:35.085159    5088 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002311752s
	I0429 18:44:35.085203    5088 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 18:44:35.085203    5088 kubeadm.go:309] [api-check] The API server is healthy after 7.502719743s
	I0429 18:44:35.085203    5088 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 18:44:35.085905    5088 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 18:44:35.086311    5088 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 18:44:35.086576    5088 kubeadm.go:309] [mark-control-plane] Marking the node addons-442400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 18:44:35.086820    5088 kubeadm.go:309] [bootstrap-token] Using token: 90tzvl.5yye7lxa8a87sct2
	I0429 18:44:35.090642    5088 out.go:204]   - Configuring RBAC rules ...
	I0429 18:44:35.091640    5088 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 18:44:35.091937    5088 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 18:44:35.092097    5088 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 18:44:35.092484    5088 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0429 18:44:35.092727    5088 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 18:44:35.092727    5088 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 18:44:35.093165    5088 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 18:44:35.093165    5088 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 18:44:35.093165    5088 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 18:44:35.093165    5088 kubeadm.go:309] 
	I0429 18:44:35.093165    5088 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 18:44:35.093165    5088 kubeadm.go:309] 
	I0429 18:44:35.093729    5088 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 18:44:35.093865    5088 kubeadm.go:309] 
	I0429 18:44:35.093948    5088 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 18:44:35.094187    5088 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 18:44:35.094187    5088 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 18:44:35.094187    5088 kubeadm.go:309] 
	I0429 18:44:35.094502    5088 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 18:44:35.094538    5088 kubeadm.go:309] 
	I0429 18:44:35.094714    5088 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 18:44:35.094747    5088 kubeadm.go:309] 
	I0429 18:44:35.094747    5088 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 18:44:35.095130    5088 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 18:44:35.095376    5088 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 18:44:35.095420    5088 kubeadm.go:309] 
	I0429 18:44:35.095449    5088 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 18:44:35.095449    5088 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 18:44:35.095449    5088 kubeadm.go:309] 
	I0429 18:44:35.096016    5088 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 90tzvl.5yye7lxa8a87sct2 \
	I0429 18:44:35.096254    5088 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 18:44:35.096356    5088 kubeadm.go:309] 	--control-plane 
	I0429 18:44:35.096356    5088 kubeadm.go:309] 
	I0429 18:44:35.096385    5088 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 18:44:35.096385    5088 kubeadm.go:309] 
	I0429 18:44:35.096385    5088 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 90tzvl.5yye7lxa8a87sct2 \
	I0429 18:44:35.096950    5088 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
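	The `--discovery-token-ca-cert-hash` printed in the join commands above is a SHA-256 digest of the cluster CA's DER-encoded public key. Per the kubeadm documentation it can be recomputed from the CA certificate; a sketch, assuming openssl and an RSA CA (a throwaway self-signed cert stands in for the node's /var/lib/minikube/certs/ca.crt):

```shell
#!/bin/sh
# Recompute a kubeadm discovery-token-ca-cert-hash: sha256 over the
# DER-encoded public key of the cluster CA certificate.
set -eu

workdir=$(mktemp -d)

# Throwaway self-signed CA for the sketch only.
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=minikubeCA \
    -keyout "$workdir/ca.key" -out "$workdir/ca.crt" -days 1 2>/dev/null

hash=$(openssl x509 -pubkey -in "$workdir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$hash"
```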
	I0429 18:44:35.096950    5088 cni.go:84] Creating CNI manager for ""
	I0429 18:44:35.097046    5088 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 18:44:35.100568    5088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 18:44:35.117525    5088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 18:44:35.139285    5088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
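	The 496-byte file scp'd from memory above is minikube's bridge CNI network list; its exact contents are not in the log, but a bridge conflist of the kind the CNI spec describes looks roughly like the following (field values are illustrative, not the real file, and a temp dir stands in for /etc/cni/net.d):

```shell
#!/bin/sh
# Illustrative bridge CNI conflist of the kind minikube writes to
# /etc/cni/net.d/1-k8s.conflist. Values are examples only.
set -eu

netd=$(mktemp -d)   # stands in for /etc/cni/net.d

cat > "$netd/1-k8s.conflist" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF

grep -q '"type": "bridge"' "$netd/1-k8s.conflist"
```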
	I0429 18:44:35.177091    5088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 18:44:35.193633    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-442400 minikube.k8s.io/updated_at=2024_04_29T18_44_35_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=addons-442400 minikube.k8s.io/primary=true
	I0429 18:44:35.195003    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:35.203631    5088 ops.go:34] apiserver oom_adj: -16
	I0429 18:44:35.386743    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:35.893805    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:36.394708    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:36.897360    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:37.397313    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:37.899833    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:38.386769    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:38.891006    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:39.395504    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:39.896543    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:40.400240    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:40.888055    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:41.391343    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:41.894397    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:42.395641    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:42.899159    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:43.400656    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:43.902133    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:44.392562    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:44.901546    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:45.385995    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:45.891919    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:46.401069    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:46.889050    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:47.392256    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:47.895762    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:48.399718    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:48.891786    5088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:44:49.015603    5088 kubeadm.go:1107] duration metric: took 13.838412s to wait for elevateKubeSystemPrivileges
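	The burst of identical `kubectl get sa default` runs above is a ~500ms poll loop: elevateKubeSystemPrivileges waits for the `default` service account to exist before creating the minikube-rbac clusterrolebinding, and here it took 13.8s (about 28 attempts). The pattern, sketched with a stand-in check that succeeds on the third attempt:

```shell
#!/bin/sh
# Poll-until-ready loop of the kind the log shows (kubectl get sa
# default every ~500ms). "check" is a stand-in for the kubectl call
# and is wired to succeed on the 3rd attempt.
set -eu

attempts=0
check() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]    # pretend the service account now exists
}

tries=0
until check; do
    tries=$((tries + 1))
    [ "$tries" -lt 60 ] || { echo "timed out" >&2; exit 1; }
    sleep 0.1                # the real loop waits ~500ms per attempt
done
echo "ready after $attempts attempts"
```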
	W0429 18:44:49.015675    5088 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 18:44:49.015802    5088 kubeadm.go:393] duration metric: took 28.4966557s to StartCluster
	I0429 18:44:49.015874    5088 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:49.016050    5088 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 18:44:49.018417    5088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:44:49.020303    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 18:44:49.020303    5088 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.248.23 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 18:44:49.024192    5088 out.go:177] * Verifying Kubernetes components...
	I0429 18:44:49.020562    5088 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0429 18:44:49.020890    5088 config.go:182] Loaded profile config "addons-442400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 18:44:49.028406    5088 addons.go:69] Setting cloud-spanner=true in profile "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:69] Setting yakd=true in profile "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:69] Setting gcp-auth=true in profile "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:69] Setting metrics-server=true in profile "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:234] Setting addon metrics-server=true in "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:69] Setting storage-provisioner=true in profile "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:234] Setting addon storage-provisioner=true in "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-442400"
	I0429 18:44:49.028731    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.028731    5088 addons.go:69] Setting volumesnapshots=true in profile "addons-442400"
	I0429 18:44:49.028731    5088 addons.go:234] Setting addon volumesnapshots=true in "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:234] Setting addon yakd=true in "addons-442400"
	I0429 18:44:49.028843    5088 addons.go:69] Setting registry=true in profile "addons-442400"
	I0429 18:44:49.028953    5088 addons.go:234] Setting addon registry=true in "addons-442400"
	I0429 18:44:49.028953    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.028731    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.028843    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.028406    5088 mustload.go:65] Loading cluster: addons-442400
	I0429 18:44:49.028953    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.028406    5088 addons.go:234] Setting addon cloud-spanner=true in "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:69] Setting inspektor-gadget=true in profile "addons-442400"
	I0429 18:44:49.029690    5088 addons.go:234] Setting addon inspektor-gadget=true in "addons-442400"
	I0429 18:44:49.029690    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.029690    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.028406    5088 addons.go:69] Setting helm-tiller=true in profile "addons-442400"
	I0429 18:44:49.029690    5088 addons.go:234] Setting addon helm-tiller=true in "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:69] Setting ingress=true in profile "addons-442400"
	I0429 18:44:49.030675    5088 addons.go:234] Setting addon ingress=true in "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:69] Setting ingress-dns=true in profile "addons-442400"
	I0429 18:44:49.030675    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.030675    5088 addons.go:234] Setting addon ingress-dns=true in "addons-442400"
	I0429 18:44:49.028731    5088 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-442400"
	I0429 18:44:49.028731    5088 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-442400"
	I0429 18:44:49.028406    5088 addons.go:69] Setting default-storageclass=true in profile "addons-442400"
	I0429 18:44:49.028953    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.029690    5088 config.go:182] Loaded profile config "addons-442400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 18:44:49.030675    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.032730    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.030675    5088 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-442400"
	I0429 18:44:49.030675    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.034680    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.035694    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.030675    5088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-442400"
	I0429 18:44:49.037681    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.037681    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.037681    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.037681    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.030675    5088 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-442400"
	I0429 18:44:49.037681    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.037681    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:49.037681    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.037681    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.038690    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.039684    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.039684    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.039684    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.037681    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:49.059349    5088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:44:50.815397    5088 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.7560357s)
	I0429 18:44:50.838291    5088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 18:44:50.838935    5088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.8186189s)
	I0429 18:44:50.839220    5088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 18:44:52.294439    5088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.455209s)
	I0429 18:44:52.294439    5088 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 18:44:52.296661    5088 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.4583598s)
	I0429 18:44:52.306325    5088 node_ready.go:35] waiting up to 6m0s for node "addons-442400" to be "Ready" ...
	I0429 18:44:53.025002    5088 node_ready.go:49] node "addons-442400" has status "Ready":"True"
	I0429 18:44:53.025002    5088 node_ready.go:38] duration metric: took 718.672ms for node "addons-442400" to be "Ready" ...
	I0429 18:44:53.025002    5088 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 18:44:53.537461    5088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gsjpj" in "kube-system" namespace to be "Ready" ...
	I0429 18:44:53.919412    5088 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-442400" context rescaled to 1 replicas
	I0429 18:44:55.565477    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.565477    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.569592    5088 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 18:44:55.568376    5088 pod_ready.go:102] pod "coredns-7db6d8ff4d-gsjpj" in "kube-system" namespace has status "Ready":"False"
	I0429 18:44:55.572947    5088 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 18:44:55.576429    5088 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 18:44:55.576429    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 18:44:55.576539    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:55.584517    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.584517    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.587518    5088 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0429 18:44:55.590516    5088 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 18:44:55.590516    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 18:44:55.590516    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:55.613484    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.613484    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.614817    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.614817    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.616675    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.616675    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.620501    5088 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 18:44:55.617502    5088 addons.go:234] Setting addon default-storageclass=true in "addons-442400"
	I0429 18:44:55.619497    5088 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-442400"
	I0429 18:44:55.623475    5088 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 18:44:55.623475    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 18:44:55.623475    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:55.623475    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:55.623475    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:55.624499    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:55.625491    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:55.923513    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.923513    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.942535    5088 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 18:44:55.955123    5088 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 18:44:55.955123    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 18:44:55.955123    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:55.956591    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.956591    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.960596    5088 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 18:44:55.968758    5088 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 18:44:55.968758    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 18:44:55.968758    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:55.976984    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.976984    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.979878    5088 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0429 18:44:55.977299    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.978206    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.985736    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.990369    5088 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0429 18:44:55.985736    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.985736    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.985736    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.987783    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.990369    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.990369    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:44:55.993917    5088 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 18:44:55.994123    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.994201    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.995730    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:55.998157    5088 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0429 18:44:55.998244    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:55.998244    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0429 18:44:56.000869    5088 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 18:44:55.998330    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:56.005365    5088 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 18:44:56.009084    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:44:56.009084    5088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 18:44:56.010057    5088 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 18:44:56.010057    5088 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 18:44:56.014072    5088 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 18:44:56.014072    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 18:44:56.015055    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:56.012055    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:44:56.012055    5088 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 18:44:56.020311    5088 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 18:44:56.026207    5088 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 18:44:56.034216    5088 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:44:56.037218    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 18:44:56.037218    5088 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 18:44:56.040221    5088 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 18:44:56.041220    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 18:44:56.041220    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 18:44:56.041220    5088 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 18:44:56.041220    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:56.044220    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:56.045220    5088 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 18:44:56.045220    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:56.045220    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 18:44:56.058230    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:56.061227    5088 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 18:44:56.070212    5088 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 18:44:56.082150    5088 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 18:44:56.086082    5088 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 18:44:56.089076    5088 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 18:44:56.089076    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 18:44:56.089076    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:44:58.089184    5088 pod_ready.go:102] pod "coredns-7db6d8ff4d-gsjpj" in "kube-system" namespace has status "Ready":"False"
	I0429 18:45:00.272080    5088 pod_ready.go:102] pod "coredns-7db6d8ff4d-gsjpj" in "kube-system" namespace has status "Ready":"False"
	I0429 18:45:01.711361    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:01.711361    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:01.711361    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:01.739335    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:01.739335    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:01.743548    5088 out.go:177]   - Using image docker.io/busybox:stable
	I0429 18:45:01.746906    5088 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 18:45:01.749639    5088 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 18:45:01.749639    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 18:45:01.749639    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:45:01.753097    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:01.753097    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:01.753644    5088 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 18:45:01.753644    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 18:45:01.753729    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:45:01.757891    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:01.757891    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:01.757891    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:01.766563    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:01.766563    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:01.766563    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:01.902491    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:01.902491    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:01.902491    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:01.959564    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:01.959564    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:01.959564    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:02.017366    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:02.019206    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:02.019206    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:02.035555    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:02.035555    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:02.035555    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:02.470068    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:02.470068    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:02.470068    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:02.520369    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:02.520428    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:02.520482    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:02.526368    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:02.526368    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:02.526368    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:02.537880    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:02.537880    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:02.585977    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:02.585977    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:02.585977    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:02.768939    5088 pod_ready.go:102] pod "coredns-7db6d8ff4d-gsjpj" in "kube-system" namespace has status "Ready":"False"
	I0429 18:45:03.801081    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:03.821576    5088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 18:45:03.821576    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:45:04.253637    5088 pod_ready.go:92] pod "coredns-7db6d8ff4d-gsjpj" in "kube-system" namespace has status "Ready":"True"
	I0429 18:45:04.253637    5088 pod_ready.go:81] duration metric: took 10.7158796s for pod "coredns-7db6d8ff4d-gsjpj" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:04.253637    5088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sg4xf" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:05.717205    5088 pod_ready.go:92] pod "coredns-7db6d8ff4d-sg4xf" in "kube-system" namespace has status "Ready":"True"
	I0429 18:45:05.717205    5088 pod_ready.go:81] duration metric: took 1.4635573s for pod "coredns-7db6d8ff4d-sg4xf" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:05.717205    5088 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-442400" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:05.765205    5088 pod_ready.go:92] pod "etcd-addons-442400" in "kube-system" namespace has status "Ready":"True"
	I0429 18:45:05.765205    5088 pod_ready.go:81] duration metric: took 47.9994ms for pod "etcd-addons-442400" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:05.765205    5088 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-442400" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:06.004060    5088 pod_ready.go:92] pod "kube-apiserver-addons-442400" in "kube-system" namespace has status "Ready":"True"
	I0429 18:45:06.004060    5088 pod_ready.go:81] duration metric: took 238.853ms for pod "kube-apiserver-addons-442400" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:06.004060    5088 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-442400" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:06.059046    5088 pod_ready.go:92] pod "kube-controller-manager-addons-442400" in "kube-system" namespace has status "Ready":"True"
	I0429 18:45:06.059046    5088 pod_ready.go:81] duration metric: took 54.9858ms for pod "kube-controller-manager-addons-442400" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:06.059046    5088 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f8rwp" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:06.107052    5088 pod_ready.go:92] pod "kube-proxy-f8rwp" in "kube-system" namespace has status "Ready":"True"
	I0429 18:45:06.108046    5088 pod_ready.go:81] duration metric: took 49.0004ms for pod "kube-proxy-f8rwp" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:06.108046    5088 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-442400" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:06.150043    5088 pod_ready.go:92] pod "kube-scheduler-addons-442400" in "kube-system" namespace has status "Ready":"True"
	I0429 18:45:06.150043    5088 pod_ready.go:81] duration metric: took 41.9959ms for pod "kube-scheduler-addons-442400" in "kube-system" namespace to be "Ready" ...
	I0429 18:45:06.150043    5088 pod_ready.go:38] duration metric: took 13.1249463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 18:45:06.151054    5088 api_server.go:52] waiting for apiserver process to appear ...
	I0429 18:45:06.173060    5088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 18:45:06.341059    5088 api_server.go:72] duration metric: took 17.3205138s to wait for apiserver process to appear ...
	I0429 18:45:06.341059    5088 api_server.go:88] waiting for apiserver healthz status ...
	I0429 18:45:06.341059    5088 api_server.go:253] Checking apiserver healthz at https://172.17.248.23:8443/healthz ...
	I0429 18:45:06.373152    5088 api_server.go:279] https://172.17.248.23:8443/healthz returned 200:
	ok
	I0429 18:45:06.376061    5088 api_server.go:141] control plane version: v1.30.0
	I0429 18:45:06.376061    5088 api_server.go:131] duration metric: took 35.0022ms to wait for apiserver health ...
	I0429 18:45:06.376061    5088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 18:45:06.413448    5088 system_pods.go:59] 7 kube-system pods found
	I0429 18:45:06.413448    5088 system_pods.go:61] "coredns-7db6d8ff4d-gsjpj" [c468a9e6-2cef-465f-b1ea-edcafd1f0244] Running
	I0429 18:45:06.413448    5088 system_pods.go:61] "coredns-7db6d8ff4d-sg4xf" [8da3e6d7-7293-408f-9fbd-d0dcd1b34a3a] Running
	I0429 18:45:06.413448    5088 system_pods.go:61] "etcd-addons-442400" [104b2e61-3fbc-4319-8f76-f002df712887] Running
	I0429 18:45:06.413448    5088 system_pods.go:61] "kube-apiserver-addons-442400" [fb30937d-4e86-40b1-823f-6513c284f183] Running
	I0429 18:45:06.413448    5088 system_pods.go:61] "kube-controller-manager-addons-442400" [cc851f3d-25e1-4e56-b520-5880006a2572] Running
	I0429 18:45:06.413448    5088 system_pods.go:61] "kube-proxy-f8rwp" [4e6736c8-fc50-4819-be96-7511a35860ac] Running
	I0429 18:45:06.413448    5088 system_pods.go:61] "kube-scheduler-addons-442400" [c42e8109-6106-4ee3-9dac-73061dc3aae7] Running
	I0429 18:45:06.413448    5088 system_pods.go:74] duration metric: took 37.3861ms to wait for pod list to return data ...
	I0429 18:45:06.413448    5088 default_sa.go:34] waiting for default service account to be created ...
	I0429 18:45:06.429437    5088 default_sa.go:45] found service account: "default"
	I0429 18:45:06.429437    5088 default_sa.go:55] duration metric: took 15.9887ms for default service account to be created ...
	I0429 18:45:06.429437    5088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 18:45:06.443452    5088 system_pods.go:86] 7 kube-system pods found
	I0429 18:45:06.443452    5088 system_pods.go:89] "coredns-7db6d8ff4d-gsjpj" [c468a9e6-2cef-465f-b1ea-edcafd1f0244] Running
	I0429 18:45:06.443452    5088 system_pods.go:89] "coredns-7db6d8ff4d-sg4xf" [8da3e6d7-7293-408f-9fbd-d0dcd1b34a3a] Running
	I0429 18:45:06.443452    5088 system_pods.go:89] "etcd-addons-442400" [104b2e61-3fbc-4319-8f76-f002df712887] Running
	I0429 18:45:06.443452    5088 system_pods.go:89] "kube-apiserver-addons-442400" [fb30937d-4e86-40b1-823f-6513c284f183] Running
	I0429 18:45:06.443452    5088 system_pods.go:89] "kube-controller-manager-addons-442400" [cc851f3d-25e1-4e56-b520-5880006a2572] Running
	I0429 18:45:06.443452    5088 system_pods.go:89] "kube-proxy-f8rwp" [4e6736c8-fc50-4819-be96-7511a35860ac] Running
	I0429 18:45:06.443452    5088 system_pods.go:89] "kube-scheduler-addons-442400" [c42e8109-6106-4ee3-9dac-73061dc3aae7] Running
	I0429 18:45:06.443452    5088 system_pods.go:126] duration metric: took 14.0156ms to wait for k8s-apps to be running ...
	I0429 18:45:06.443452    5088 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 18:45:06.463448    5088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 18:45:06.529322    5088 system_svc.go:56] duration metric: took 85.8694ms WaitForService to wait for kubelet
	I0429 18:45:06.529322    5088 kubeadm.go:576] duration metric: took 17.5087758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 18:45:06.529322    5088 node_conditions.go:102] verifying NodePressure condition ...
	I0429 18:45:06.535336    5088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 18:45:06.536333    5088 node_conditions.go:123] node cpu capacity is 2
	I0429 18:45:06.536333    5088 node_conditions.go:105] duration metric: took 7.0109ms to run NodePressure ...
	I0429 18:45:06.536333    5088 start.go:240] waiting for startup goroutines ...
	I0429 18:45:07.945480    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:07.945563    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:07.945563    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:08.127315    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:08.128268    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:08.128268    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:08.961422    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:08.961422    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:08.962331    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.089144    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.089144    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.090143    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.233180    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.233180    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.233180    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.309724    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.309724    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.309724    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.367040    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.371453    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.371861    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.452012    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.452012    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.453001    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.485561    5088 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 18:45:09.485561    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 18:45:09.508212    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.508212    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.509221    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.557225    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 18:45:09.584534    5088 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 18:45:09.584534    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 18:45:09.619514    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.619514    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.620129    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.690636    5088 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 18:45:09.690636    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 18:45:09.741107    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.741295    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.742110    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.803455    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.803540    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.804475    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.827578    5088 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 18:45:09.827578    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 18:45:09.874644    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:45:09.885649    5088 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 18:45:09.885649    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 18:45:09.888643    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:09.888643    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.889639    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:09.893640    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 18:45:09.924401    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:09.924503    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:09.924653    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:09.953282    5088 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 18:45:09.953370    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 18:45:10.074599    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 18:45:10.074599    5088 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 18:45:10.075595    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 18:45:10.136268    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 18:45:10.148746    5088 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 18:45:10.148746    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 18:45:10.290606    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 18:45:10.357067    5088 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 18:45:10.357067    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 18:45:10.358052    5088 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 18:45:10.358052    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 18:45:10.397060    5088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 18:45:10.397060    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 18:45:10.432056    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:10.432056    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:10.433059    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:10.479658    5088 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 18:45:10.479658    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 18:45:10.531377    5088 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0429 18:45:10.531447    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0429 18:45:10.679681    5088 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 18:45:10.679921    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 18:45:10.756555    5088 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 18:45:10.756640    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 18:45:10.763787    5088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 18:45:10.763876    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 18:45:10.875221    5088 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 18:45:10.875221    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 18:45:10.890448    5088 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 18:45:10.890582    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0429 18:45:10.991573    5088 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 18:45:10.991713    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 18:45:11.037541    5088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 18:45:11.037610    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 18:45:11.044642    5088 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 18:45:11.044642    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 18:45:11.104903    5088 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 18:45:11.105005    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 18:45:11.118065    5088 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 18:45:11.119640    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 18:45:11.204083    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 18:45:11.260591    5088 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 18:45:11.260591    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 18:45:11.304019    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 18:45:11.386484    5088 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 18:45:11.386598    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 18:45:11.426469    5088 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 18:45:11.426527    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 18:45:11.547931    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 18:45:11.549295    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:11.549295    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:11.550020    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:11.640963    5088 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 18:45:11.641037    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 18:45:11.659627    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:11.660469    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:11.661211    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:11.741301    5088 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 18:45:11.741301    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 18:45:11.803746    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 18:45:12.019776    5088 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 18:45:12.019776    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 18:45:12.190600    5088 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 18:45:12.190600    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 18:45:12.348645    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 18:45:12.378134    5088 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 18:45:12.378134    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 18:45:12.493574    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 18:45:12.528826    5088 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 18:45:12.528919    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 18:45:12.775231    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:12.775231    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:12.775306    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:12.916428    5088 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 18:45:12.916518    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 18:45:13.157585    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 18:45:13.966829    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 18:45:14.616199    5088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 18:45:15.500690    5088 addons.go:234] Setting addon gcp-auth=true in "addons-442400"
	I0429 18:45:15.500690    5088 host.go:66] Checking if "addons-442400" exists ...
	I0429 18:45:15.502039    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:45:17.850595    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:17.850666    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:17.871211    5088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 18:45:17.871211    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-442400 ).state
	I0429 18:45:20.328850    5088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 18:45:20.329845    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:20.329845    5088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-442400 ).networkadapters[0]).ipaddresses[0]
	I0429 18:45:23.122590    5088 main.go:141] libmachine: [stdout =====>] : 172.17.248.23
	
	I0429 18:45:23.122590    5088 main.go:141] libmachine: [stderr =====>] : 
	I0429 18:45:23.123594    5088 sshutil.go:53] new ssh client: &{IP:172.17.248.23 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-442400\id_rsa Username:docker}
	I0429 18:45:23.657153    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (14.0988421s)
	I0429 18:45:23.657281    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.7825376s)
	I0429 18:45:23.657363    5088 addons.go:470] Verifying addon ingress=true in "addons-442400"
	I0429 18:45:23.657363    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.7636238s)
	I0429 18:45:23.657363    5088 addons.go:470] Verifying addon registry=true in "addons-442400"
	I0429 18:45:23.657569    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.5828717s)
	I0429 18:45:23.657697    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (13.5213316s)
	I0429 18:45:23.663858    5088 out.go:177] * Verifying ingress addon...
	I0429 18:45:23.657728    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (13.3668567s)
	I0429 18:45:23.657811    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (12.4536383s)
	I0429 18:45:23.657871    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.3537631s)
	I0429 18:45:23.658096    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.1100138s)
	I0429 18:45:23.664872    5088 addons.go:470] Verifying addon metrics-server=true in "addons-442400"
	I0429 18:45:23.658220    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.8543888s)
	I0429 18:45:23.658220    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.3094933s)
	I0429 18:45:23.658388    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.1647329s)
	I0429 18:45:23.658540    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.5008797s)
	I0429 18:45:23.660936    5088 out.go:177] * Verifying registry addon...
	I0429 18:45:23.668872    5088 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-442400 service yakd-dashboard -n yakd-dashboard
	
	W0429 18:45:23.664872    5088 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 18:45:23.668872    5088 retry.go:31] will retry after 140.76637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
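	The failure above is a CRD-establishment race: the single `kubectl apply` submits a `VolumeSnapshotClass` object in the same invocation that creates the `snapshot.storage.k8s.io` CRDs, and the API server has not yet registered the new kind when the object is validated. minikube's own retry (and the later `apply --force` re-run) works around this; a manual equivalent, sketched here against the manifest paths shown in this log and assuming cluster access, would wait for the CRD to be established before applying objects of that kind:

	```shell
	# Sketch only: serialize CRD creation and CR creation to avoid
	# "no matches for kind VolumeSnapshotClass". Paths mirror the addon
	# manifests in the log above; a reachable cluster is assumed.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml

	# Block until the apiserver reports the CRD as Established.
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io

	# Now the VolumeSnapshotClass object can be resolved and applied.
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	```

	`kubectl wait --for=condition=established` on a CRD is the documented way to gate on API registration; applying everything in one `-f ... -f ...` invocation, as the addon does, leaves the ordering to chance and relies on the retry loop seen below.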
	I0429 18:45:23.669859    5088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 18:45:23.669859    5088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 18:45:23.690888    5088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 18:45:23.690888    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:23.705014    5088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 18:45:23.705014    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0429 18:45:23.723642    5088 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
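	The `storage-provisioner-rancher` warning above is an optimistic-concurrency conflict: the addon callback read the `local-path` StorageClass, something else updated it in between, and the subsequent update was rejected because its resourceVersion was stale. A hedged manual sketch, assuming the cluster from this run and the standard default-class annotation:

	```shell
	# Sketch only: mark local-path as the default StorageClass, retrying a
	# few times in case of a concurrent-modification conflict (assumes the
	# kubectl context points at this cluster).
	for attempt in 1 2 3; do
	  kubectl patch storageclass local-path -p \
	    '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' \
	    && break
	  sleep 1
	done
	```

	Using `patch` rather than a read-modify-`update` cycle sidesteps the stale-resourceVersion problem, since the server merges the change instead of comparing against the version the client last saw.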
	I0429 18:45:23.839205    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 18:45:24.193065    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:24.199693    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:24.700934    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:24.716633    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:24.770891    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.8039843s)
	I0429 18:45:24.770891    5088 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-442400"
	I0429 18:45:24.770891    5088 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.8996304s)
	I0429 18:45:24.777501    5088 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 18:45:24.779948    5088 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 18:45:24.782939    5088 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 18:45:24.784933    5088 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 18:45:24.784933    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 18:45:24.784933    5088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 18:45:24.809508    5088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 18:45:24.809566    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:25.098505    5088 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 18:45:25.098505    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 18:45:25.189963    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:25.206944    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:25.279301    5088 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 18:45:25.279301    5088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 18:45:25.312343    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:25.387684    5088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 18:45:25.678543    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:25.683692    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:25.810627    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:26.187943    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:26.195117    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:26.305292    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:26.691981    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:26.693988    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:26.716167    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.8769408s)
	I0429 18:45:26.805419    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:27.211660    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:27.221351    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:27.326884    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:27.694749    5088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.3070481s)
	I0429 18:45:27.703179    5088 addons.go:470] Verifying addon gcp-auth=true in "addons-442400"
	I0429 18:45:27.707545    5088 out.go:177] * Verifying gcp-auth addon...
	I0429 18:45:27.713060    5088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 18:45:27.762934    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:27.804131    5088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 18:45:27.804278    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:27.805719    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:27.935297    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:28.196557    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:28.201371    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:28.221204    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:28.309125    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:28.685427    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:28.692370    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:28.734894    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:28.797359    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:29.188996    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:29.193937    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:29.225919    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:29.305147    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:29.680145    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:29.681163    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:29.726435    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:29.807649    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:30.189490    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:30.189490    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:30.235497    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:30.299732    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:30.689526    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:30.689767    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:30.722606    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:30.803693    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:31.186120    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:31.186120    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:31.231848    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:31.294760    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:31.691953    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:31.692480    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:31.720740    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:31.802474    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:32.180631    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:32.181086    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:32.224637    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:32.307048    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:32.685463    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:32.685839    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:32.730475    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:32.800221    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:33.185272    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:33.189997    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:33.230216    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:33.296812    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:33.693647    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:33.694074    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:33.720777    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:33.805651    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:34.182408    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:34.183005    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:34.218982    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:34.301054    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:34.679653    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:34.681475    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:34.723673    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:34.806924    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:35.186043    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:35.186687    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:35.229103    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:35.296088    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:35.692127    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:35.694096    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:35.719982    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:35.805179    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:36.182573    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:36.183587    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:36.227126    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:36.297077    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:36.689279    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:36.692467    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:36.716979    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:36.802356    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:37.182245    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:37.185067    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:37.226730    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:37.295266    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:37.686271    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:37.689399    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:37.731346    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:37.799364    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:38.194147    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:38.195157    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:38.222262    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:38.306402    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:38.689617    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:38.689756    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:38.718560    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:39.175733    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:39.180645    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:39.182618    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:39.233089    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:39.298459    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:39.694618    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:39.696197    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:39.722641    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:39.803292    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:40.181978    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:40.182757    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:40.225429    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:40.308226    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:40.687988    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:40.688311    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:41.715075    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:41.720523    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:41.722547    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:41.723506    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:42.119416    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:42.120195    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:42.126545    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:42.128197    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:42.135378    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:42.183387    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:42.184358    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:42.325509    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:42.331418    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:42.692770    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:42.696568    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:42.720572    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:42.802931    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:43.179924    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:43.185300    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:43.225009    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:43.307852    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:43.688108    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:43.688387    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:43.730004    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:43.799048    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:44.197343    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:44.197707    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:44.224361    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:44.315027    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:44.688003    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:44.689663    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:44.732866    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:44.796780    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:45.178131    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:45.178481    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:45.224041    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:45.305501    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:45.689488    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:45.690829    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:45.731609    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:45.798904    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:46.192619    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:46.194869    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:46.223723    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:46.306721    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:46.685926    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:46.686081    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:46.729145    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:46.794799    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:47.191070    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:47.192100    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:47.223641    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:47.303010    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:47.679329    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:47.680365    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:47.722899    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:47.803353    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:48.185372    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:48.192550    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:48.232932    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:48.298310    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:48.688863    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:48.694926    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:48.726191    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:48.792558    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:49.194173    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:49.194173    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:49.219545    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:49.305019    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:49.680836    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:49.681228    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:49.726483    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:49.813912    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:50.514212    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:50.515036    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:50.515757    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:50.519014    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:50.869768    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:50.873271    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:50.873638    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:50.874922    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:51.209615    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:51.210159    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:51.231634    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:51.297003    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:51.691328    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:51.692453    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:51.731041    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:51.796635    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:52.190623    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:52.195530    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:52.219759    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:52.309176    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:52.689598    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:52.692295    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:52.731480    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:52.799365    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:53.181683    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:53.182101    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:53.225560    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:53.309238    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:53.686053    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:53.686620    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:53.738020    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:53.797827    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:54.186253    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:54.187927    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:54.231362    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:54.302801    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:54.680633    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:54.681266    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:54.728636    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:54.807182    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:55.223848    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:55.226669    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:55.228708    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:55.302946    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:55.694207    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:55.698438    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:55.722261    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:55.804060    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:56.181865    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:56.185838    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:56.227936    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:56.294937    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:56.692441    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:56.694430    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:56.722050    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:56.805670    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:57.184329    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:57.187909    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:57.228678    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:57.295658    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:57.692421    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:57.698442    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:57.721981    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:57.804359    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:58.187452    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:58.187452    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:58.230860    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:58.298783    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:58.690460    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:58.694284    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:58.719796    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:58.805054    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:59.182960    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:59.182960    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:59.223499    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:59.294044    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:45:59.688924    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:45:59.689916    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:45:59.734791    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:45:59.800490    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:00.186886    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:00.187161    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:00.225144    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:00.304456    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:00.679853    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:00.680442    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:00.724006    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:00.802051    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:01.191112    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:01.194116    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:01.219436    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:01.298631    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:01.694211    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:01.694674    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:01.718940    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:01.802775    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:02.186065    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:02.187061    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:02.229685    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:02.294338    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:02.689871    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:02.689871    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:02.718215    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:02.802361    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:03.184555    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:03.186581    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:03.227761    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:03.294775    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:03.692308    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:03.692308    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:03.720119    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:03.804064    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:04.182905    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:04.183115    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:04.225497    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:04.306993    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:04.686423    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:04.686993    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:04.731593    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:04.797362    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:05.194147    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:05.194579    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:05.221771    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:05.306514    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:05.682614    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:05.682814    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:05.725184    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:05.809166    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:06.193704    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:06.195678    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:06.218530    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:06.309192    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:06.679831    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:06.680139    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:06.724642    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:06.804460    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:07.185223    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:07.185443    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:07.229409    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:07.297033    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:07.699479    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:07.701838    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:07.736701    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:07.828152    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:08.185132    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:08.185480    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:08.228690    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:08.296847    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:08.689790    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:08.690083    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:08.731554    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:08.798558    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:09.180917    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:09.181106    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:09.232896    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:09.298696    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:09.682525    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:09.683002    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:09.726531    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:09.811806    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:10.192327    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:10.192327    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:10.219536    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:10.308137    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:10.688834    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:10.689509    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:10.734788    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:10.802455    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:11.371718    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:11.372154    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:11.372651    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:11.378647    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:11.691256    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:11.691320    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:11.720229    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:11.803641    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:12.185356    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:12.188284    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:12.232315    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:12.297534    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:12.692807    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:12.693785    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:12.721957    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:12.805683    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:13.190479    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:13.190838    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:13.227955    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:13.295803    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:13.688320    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:13.689167    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:13.731653    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:13.802979    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:14.179304    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:14.179629    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:14.224492    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:14.305952    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:14.687443    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:14.688461    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:14.732325    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:14.798985    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:15.196499    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:15.196934    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:15.223087    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:15.307637    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:15.682423    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:15.690808    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:15.727195    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:15.807607    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:16.180399    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:16.180641    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:16.225046    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:16.314491    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:16.714154    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:16.714431    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:16.719435    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:16.928926    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:17.411499    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:17.415357    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:17.415701    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:17.416549    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:17.685280    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:17.686876    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:17.729425    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:17.796761    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:18.312291    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:18.323934    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:18.332390    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:18.353994    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:18.753679    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:18.754293    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:18.763255    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:18.848399    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:19.206699    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:19.206905    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:19.224807    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:19.306624    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:19.685751    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:19.686025    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:19.728665    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:19.808344    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:20.186589    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:20.187827    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:20.230227    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:20.297829    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:20.681865    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:20.683191    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:20.724571    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:20.807551    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:21.185144    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:21.185144    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:21.229960    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:21.295606    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:21.692464    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:21.692902    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:21.721254    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:21.803638    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:22.185476    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:22.188054    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:22.229820    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:22.296223    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:22.687786    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:22.688251    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:22.730340    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:22.797792    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:23.193281    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:23.196885    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:23.222459    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:23.304940    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:23.687330    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:23.688325    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:23.732337    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:23.797506    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:24.196248    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:24.196565    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:24.221861    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:24.306758    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:24.692051    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:24.692567    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:24.733515    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:24.799236    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:25.185319    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:25.186251    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:25.228244    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:25.293255    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:25.691842    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:25.691842    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:25.720133    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:25.803098    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:26.183831    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:26.183831    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:26.227692    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:26.295168    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:26.692280    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:26.694812    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:26.733293    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:26.800317    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:27.179909    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:27.180219    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:27.224620    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:27.308556    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:27.719047    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:27.722981    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:27.733622    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:27.809787    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:28.189790    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:28.191848    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:28.218040    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:28.301703    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:28.688558    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:28.694097    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:28.732030    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:28.801602    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:29.196725    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:29.200445    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:29.230799    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:29.315821    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:29.686660    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:29.687208    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:29.730331    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:29.804670    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:30.193457    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:30.196286    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:31.064612    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:31.066557    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:31.068238    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:31.070926    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:31.073120    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:31.080910    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:31.183080    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:31.186162    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:31.231269    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:31.307924    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:31.817154    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:31.821650    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:31.824932    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:31.830155    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:32.183137    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:46:32.185122    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:32.230510    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:32.296243    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:32.686287    5088 kapi.go:107] duration metric: took 1m9.0159341s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 18:46:32.688282    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:32.733347    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:32.797219    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:33.188058    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:33.234993    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:33.301035    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:33.698513    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:33.722514    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:33.809344    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:34.183921    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:34.229472    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:34.303565    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:34.688430    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:34.733574    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:34.801568    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:35.181184    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:35.225480    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:35.307887    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:35.688703    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:35.733832    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:35.802180    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:36.178692    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:36.226161    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:36.308620    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:36.689757    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:36.719437    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:36.801561    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:37.360182    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:37.360437    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:37.364895    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:37.685588    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:37.724674    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:37.803596    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:38.201283    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:38.270935    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:38.305156    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:38.680788    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:38.726391    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:38.808925    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:39.197173    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:39.220873    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:39.305292    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:39.681281    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:39.733930    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:39.794499    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:40.184518    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:40.233394    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:40.299402    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:40.681018    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:40.725918    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:40.793056    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:41.192624    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:41.220598    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:41.312176    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:41.684099    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:41.727472    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:41.795922    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:42.265801    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:42.269514    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:42.296725    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:42.711281    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:42.732422    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:42.805723    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:43.191566    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:43.228428    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:43.304679    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:43.680089    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:43.725683    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:43.813483    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:44.188291    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:44.247262    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:44.333927    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:44.680351    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:44.725723    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:44.806870    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:45.194833    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:45.233306    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:45.300009    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:45.690345    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:45.733597    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:45.807490    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:46.185336    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:46.239345    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:46.314052    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:46.693348    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:46.723187    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:46.807195    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:47.183259    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:47.229191    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:47.294431    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:47.689133    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:47.719160    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:47.804255    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:48.181474    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:48.229475    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:48.295322    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:48.678004    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:48.723787    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:48.807512    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:49.191366    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:49.221818    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:49.303021    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:49.680472    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:49.727425    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:49.808236    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:50.185883    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:50.231213    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:50.298160    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:50.765221    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:50.765783    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:50.808909    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:51.189583    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:51.235104    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:51.298542    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:51.691477    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:51.721100    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:51.804404    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:52.184391    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:52.229739    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:52.296686    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:52.693875    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:52.722372    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:52.805855    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:53.185483    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:53.230354    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:53.297279    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:53.691467    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:53.721783    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:53.805440    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:54.185019    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:54.231080    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:54.297986    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:54.963176    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:54.964119    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:54.964354    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:55.182043    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:55.232254    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:55.315076    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:56.316476    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:56.316893    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:56.316893    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:56.321130    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:56.327327    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:56.327534    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:56.818563    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:56.818563    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:56.821567    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:57.193850    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:57.227425    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:57.308081    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:57.686267    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:57.734877    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:57.797988    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:58.190711    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:58.223511    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:58.304117    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:58.680953    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:58.726542    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:58.810532    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:59.389128    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:59.390109    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:59.391059    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:46:59.679241    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:46:59.725131    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:46:59.807839    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:00.186601    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:00.232731    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:00.297333    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:00.693899    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:00.721646    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:00.805994    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:01.183828    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:01.229652    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:01.295425    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:01.691388    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:01.720136    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:01.800634    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:02.180879    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:02.225566    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:02.308085    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:02.686727    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:02.731921    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:02.796904    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:03.192714    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:03.222874    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:03.306962    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:03.682564    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:03.727210    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:03.795391    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:04.188422    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:04.235408    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:04.297511    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:04.691555    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:04.720895    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:04.801990    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:05.182484    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:05.228789    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:05.297977    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:06.126325    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:06.129602    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:06.129602    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:06.186117    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:06.563679    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:06.570967    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:06.686661    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:06.732452    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:06.802840    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:07.190235    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:07.226166    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:07.302628    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:07.692694    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:07.728262    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:07.820608    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:08.183659    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:08.233488    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:08.297946    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:08.680717    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:08.725113    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:08.807842    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:09.184794    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:09.231751    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:09.303675    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:09.772299    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:09.772351    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:09.804716    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:10.187567    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:10.234579    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:10.299862    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:10.687803    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:10.734044    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:10.829868    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:11.185662    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:11.224656    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:11.307597    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:11.687825    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:11.719439    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:11.801353    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:12.182587    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:12.227278    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:12.309319    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:12.689174    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:12.732693    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:12.800225    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:13.193051    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:13.223083    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:13.306981    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:13.682152    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:13.728508    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:14.120084    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:14.187357    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:14.232872    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:14.298701    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:14.692632    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:14.720911    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:14.805876    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:15.184478    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:15.230769    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:15.298785    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:15.692964    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:15.721024    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:15.807094    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:16.181741    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:16.228078    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:16.298669    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:16.692459    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:16.722279    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:16.805529    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:17.184856    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:17.232050    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:17.303578    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:17.679327    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:17.725320    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:17.807865    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:18.185299    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:18.232802    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:18.300296    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:18.689644    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:18.720833    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:18.801494    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:19.187196    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:19.229428    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:19.309093    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:19.685087    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:19.730870    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:19.796850    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:20.190372    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:20.219983    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:20.303300    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:20.683657    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:20.731207    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:20.799558    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:21.183866    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:21.229554    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:21.467079    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:21.684243    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:21.733463    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:21.796076    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:22.187384    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:22.232136    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:22.298076    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:22.696621    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:22.724159    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:22.808725    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:23.184713    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:23.229275    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:23.295800    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:23.691309    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:23.721989    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:23.804840    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:24.185410    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:24.230722    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:24.297259    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:24.690366    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:24.721861    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:24.808232    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:25.182836    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:25.231173    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:25.313482    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:25.686174    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:25.730699    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:25.816330    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:26.189909    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:26.220127    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:26.301767    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:26.680439    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:26.726662    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:26.810278    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:27.188546    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:27.219857    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:27.304083    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:27.678440    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:27.726267    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:28.215423    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:28.215939    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:28.224027    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:28.308337    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:28.686940    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:28.730686    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:28.800775    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:29.187586    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:29.231191    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:29.337003    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:29.690402    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:29.723943    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:29.807231    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:30.182823    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:30.227234    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:30.310501    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:30.690238    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:30.719344    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:30.802861    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:31.193115    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:31.826162    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:31.828151    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:31.830040    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:31.836622    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:31.855123    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:32.183154    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:32.232086    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:32.303830    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:32.690596    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:32.719751    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:32.803303    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:33.189452    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:33.221548    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:33.305818    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:33.680384    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:33.735680    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:33.797170    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:34.188600    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:34.219440    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:34.306203    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:34.681987    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:34.727981    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:34.793166    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:35.191660    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:35.223796    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:35.306636    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:35.688450    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:35.739865    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:35.801302    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:36.180126    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:36.225982    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:36.310177    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:36.685565    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:36.928921    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:36.930777    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:37.191417    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:37.221299    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:37.303361    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:37.680470    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:37.726193    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:37.806605    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:38.182788    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:38.230411    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:38.295596    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:38.690807    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:38.720496    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:38.803652    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:39.180350    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:39.225690    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:39.308660    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:39.686429    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:39.731384    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:39.800167    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:40.191906    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:40.222426    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:40.567383    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:40.690432    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:40.721060    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:40.803582    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:41.207221    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:41.234399    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:41.303758    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:41.688285    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:41.733994    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:41.799689    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:42.193316    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:42.222708    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:42.307652    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:42.785631    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:42.791077    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:42.801691    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:43.193144    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:43.222694    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:43.305610    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:43.682642    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:43.729640    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:43.797916    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:44.193095    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:44.223505    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:44.306451    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:44.687108    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:44.731905    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:44.798005    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:45.262237    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:45.265225    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:45.307415    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:45.686648    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:45.730405    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:45.797887    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:46.193051    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:46.222331    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:46.305728    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:46.683118    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:46.728503    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:46.811636    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:47.186341    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:47.236116    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:47.305566    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:47.684139    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:47.730091    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:47.812779    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:48.188629    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:48.221297    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:48.303628    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:48.679000    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:48.724340    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:48.810091    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:49.186640    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:49.232411    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:49.298012    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:49.682821    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:49.727422    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:49.809118    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:50.190021    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:50.222552    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:50.303508    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:50.683026    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:50.733305    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:50.797129    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:51.276306    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:51.276306    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:51.298683    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:51.695309    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:51.723637    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:52.061685    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:52.184326    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:52.228319    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:52.311633    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:52.685962    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:52.734917    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:52.800004    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:53.181803    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:53.226386    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:53.294362    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:53.702359    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:53.721386    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:53.804497    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:54.185106    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:54.232325    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:54.299657    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:54.681735    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:54.728744    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:54.808399    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:55.189142    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:55.233225    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:55.302249    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:55.692773    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:55.722447    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:55.813017    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:56.182543    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:56.420191    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:56.423453    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:56.686835    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:56.731761    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:56.798537    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:57.187269    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:57.234377    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:57.297022    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:57.886426    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:57.887596    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:58.028342    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:58.197817    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:58.221440    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:58.306139    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:58.683347    5088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:47:58.727076    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:58.809390    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:59.189908    5088 kapi.go:107] duration metric: took 2m35.5189483s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 18:47:59.233507    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:59.306577    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:47:59.727065    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:47:59.810854    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:00.235808    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:00.299189    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:00.725356    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:00.807278    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:01.233803    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:01.298761    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:01.971738    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:01.973759    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:02.230137    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:02.311766    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:02.720348    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:02.804815    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:03.226610    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:03.310200    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:03.741041    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:03.806709    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:04.225344    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:04.307955    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:04.733034    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:04.799707    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:05.679871    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:05.689424    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:05.736425    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:05.825336    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:06.246250    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:48:06.308617    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:06.728844    5088 kapi.go:107] duration metric: took 2m39.0146605s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 18:48:06.731846    5088 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-442400 cluster.
	I0429 18:48:06.734952    5088 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 18:48:06.738319    5088 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 18:48:06.808887    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:07.315260    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:07.806562    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:08.294982    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:08.807926    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:09.303995    5088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:48:09.807048    5088 kapi.go:107] duration metric: took 2m45.020949s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 18:48:09.810813    5088 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0429 18:48:09.814965    5088 addons.go:505] duration metric: took 3m20.7930113s for enable addons: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin ingress-dns helm-tiller inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0429 18:48:09.814965    5088 start.go:245] waiting for cluster config update ...
	I0429 18:48:09.814965    5088 start.go:254] writing updated cluster config ...
	I0429 18:48:09.829739    5088 ssh_runner.go:195] Run: rm -f paused
	I0429 18:48:10.099748    5088 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 18:48:10.103465    5088 out.go:177] * Done! kubectl is now configured to use "addons-442400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 29 18:48:45 addons-442400 dockerd[1345]: time="2024-04-29T18:48:45.065171986Z" level=info msg="shim disconnected" id=8d820ec8e4de056084e5fdd7269dce863bba7a3bface22b59ef1de4829fd6258 namespace=moby
	Apr 29 18:48:45 addons-442400 dockerd[1345]: time="2024-04-29T18:48:45.070384818Z" level=warning msg="cleaning up after shim disconnected" id=8d820ec8e4de056084e5fdd7269dce863bba7a3bface22b59ef1de4829fd6258 namespace=moby
	Apr 29 18:48:45 addons-442400 dockerd[1345]: time="2024-04-29T18:48:45.070526119Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 18:48:45 addons-442400 dockerd[1345]: time="2024-04-29T18:48:45.317035756Z" level=info msg="shim disconnected" id=4aa8d9db23f4ef2f48afb38eb54eb6e8b7c86fdab24c50d09dc80be991e5cecd namespace=moby
	Apr 29 18:48:45 addons-442400 dockerd[1345]: time="2024-04-29T18:48:45.317120456Z" level=warning msg="cleaning up after shim disconnected" id=4aa8d9db23f4ef2f48afb38eb54eb6e8b7c86fdab24c50d09dc80be991e5cecd namespace=moby
	Apr 29 18:48:45 addons-442400 dockerd[1345]: time="2024-04-29T18:48:45.317134357Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 18:48:45 addons-442400 dockerd[1338]: time="2024-04-29T18:48:45.325135106Z" level=info msg="ignoring event" container=4aa8d9db23f4ef2f48afb38eb54eb6e8b7c86fdab24c50d09dc80be991e5cecd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 18:48:46 addons-442400 dockerd[1345]: time="2024-04-29T18:48:46.343867597Z" level=info msg="shim disconnected" id=be117b3f8c95657be3dc592a258cbdaddb5e06145890dd64e07ca8e626aa4dd3 namespace=moby
	Apr 29 18:48:46 addons-442400 dockerd[1345]: time="2024-04-29T18:48:46.343955998Z" level=warning msg="cleaning up after shim disconnected" id=be117b3f8c95657be3dc592a258cbdaddb5e06145890dd64e07ca8e626aa4dd3 namespace=moby
	Apr 29 18:48:46 addons-442400 dockerd[1345]: time="2024-04-29T18:48:46.343971898Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 18:48:46 addons-442400 dockerd[1338]: time="2024-04-29T18:48:46.372675138Z" level=info msg="ignoring event" container=be117b3f8c95657be3dc592a258cbdaddb5e06145890dd64e07ca8e626aa4dd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 18:48:53 addons-442400 cri-dockerd[1243]: time="2024-04-29T18:48:53Z" level=info msg="Pulling image docker.io/nginx:latest: 8ddb1e6cdf34: Extracting [===>                                               ]  2.556MB/41.82MB"
	Apr 29 18:48:58 addons-442400 cri-dockerd[1243]: time="2024-04-29T18:48:58Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Downloaded newer image for nginx:latest"
	Apr 29 18:49:00 addons-442400 dockerd[1345]: time="2024-04-29T18:49:00.601451359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 18:49:00 addons-442400 dockerd[1345]: time="2024-04-29T18:49:00.602298963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 18:49:00 addons-442400 dockerd[1345]: time="2024-04-29T18:49:00.602545664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 18:49:00 addons-442400 dockerd[1345]: time="2024-04-29T18:49:00.604167072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 18:49:03 addons-442400 dockerd[1345]: time="2024-04-29T18:49:03.126801153Z" level=info msg="shim disconnected" id=d676f03050be6c8139647deaa2750da9b206a2805dc83376b69358d4cc6f6889 namespace=moby
	Apr 29 18:49:03 addons-442400 dockerd[1338]: time="2024-04-29T18:49:03.127480257Z" level=info msg="ignoring event" container=d676f03050be6c8139647deaa2750da9b206a2805dc83376b69358d4cc6f6889 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 18:49:03 addons-442400 dockerd[1345]: time="2024-04-29T18:49:03.128990064Z" level=warning msg="cleaning up after shim disconnected" id=d676f03050be6c8139647deaa2750da9b206a2805dc83376b69358d4cc6f6889 namespace=moby
	Apr 29 18:49:03 addons-442400 dockerd[1345]: time="2024-04-29T18:49:03.129078064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 18:49:03 addons-442400 dockerd[1338]: time="2024-04-29T18:49:03.422959558Z" level=info msg="ignoring event" container=0fd622375a825b533e94313fd3a5bab1c853a7c8130e73836fb942875f76e978 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 18:49:03 addons-442400 dockerd[1345]: time="2024-04-29T18:49:03.424954067Z" level=info msg="shim disconnected" id=0fd622375a825b533e94313fd3a5bab1c853a7c8130e73836fb942875f76e978 namespace=moby
	Apr 29 18:49:03 addons-442400 dockerd[1345]: time="2024-04-29T18:49:03.425109568Z" level=warning msg="cleaning up after shim disconnected" id=0fd622375a825b533e94313fd3a5bab1c853a7c8130e73836fb942875f76e978 namespace=moby
	Apr 29 18:49:03 addons-442400 dockerd[1345]: time="2024-04-29T18:49:03.425129368Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	8a8a88fea160f       nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee                                                                8 seconds ago        Running             task-pv-container                        0                   2cdda6da64846       task-pv-pod
	c15966dc324e0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:abef4926f3e6f0aa50c968aa954f990a6b0178e04a955293a49d96810c43d0e1                            24 seconds ago       Exited              gadget                                   3                   4242a04ad766a       gadget-fxfn2
	2f2b3eb38f247       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                          24 seconds ago       Exited              helm-test                                0                   be117b3f8c956       helm-test
	d0c200de20505       a416a98b71e22                                                                                                                                31 seconds ago       Exited              helper-pod                               0                   4e39f76884502       helper-pod-delete-pvc-615aeca5-4422-4969-87de-5534dc276d28
	00f81704be891       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          58 seconds ago       Running             csi-snapshotter                          0                   f9181bbd6630d       csi-hostpathplugin-p67t7
	566c9d5b57da0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   54b566c18a477       gcp-auth-5db96cd9b4-qk92k
	3cf9db0b67e0d       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   f9181bbd6630d       csi-hostpathplugin-p67t7
	7ec222b11cd4e       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   978747ff35902       ingress-nginx-controller-768f948f8f-wv72k
	4e7f936f88ea1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   f9181bbd6630d       csi-hostpathplugin-p67t7
	602aec67175d6       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   f9181bbd6630d       csi-hostpathplugin-p67t7
	ea88cc490fb17       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   f9181bbd6630d       csi-hostpathplugin-p67t7
	ad5cba2421e71       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   f7751088295a3       csi-hostpath-resizer-0
	1c141b779df24       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   82187319f623a       csi-hostpath-attacher-0
	b9dc74c1407d8       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   f9181bbd6630d       csi-hostpathplugin-p67t7
	5f591be15f53d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   About a minute ago   Exited              patch                                    0                   59f69235105f1       ingress-nginx-admission-patch-mkg6k
	322e5ddb7697a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   About a minute ago   Exited              create                                   0                   a9346d2632882       ingress-nginx-admission-create-w8nwn
	9ce0318888d20       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   022583fbc109a       snapshot-controller-745499f584-w6fjs
	15008338a39de       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   f59dc354c4a8b       snapshot-controller-745499f584-7hj2b
	4c5cd4195f28d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   fd003d7ad5303       local-path-provisioner-8d985888d-f8b7b
	becac4f95867a       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   ffc2a6bca8c24       yakd-dashboard-5ddbf7d777-79778
	6a7eb9f3eddfd       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   3daeba4bad58e       kube-ingress-dns-minikube
	df3481ace555b       gcr.io/cloud-spanner-emulator/emulator@sha256:1680486ec721ba559ec0fe6f876c44cf784d77e6a82cf89874e9aff90de2ebd5                               3 minutes ago        Running             cloud-spanner-emulator                   0                   4c409301f3f9d       cloud-spanner-emulator-6dc8d859f6-wxjzz
	7abc607c4be82       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   ee45fc3f20dfb       nvidia-device-plugin-daemonset-fhh92
	07b30da5b7c08       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   174349b92af40       storage-provisioner
	7a1b4d457054a       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   5c04061fcce9b       coredns-7db6d8ff4d-sg4xf
	1a9ed61c6b290       a0bf559e280cf                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   166907fbbe5e6       kube-proxy-f8rwp
	0c712cd11b6c4       3861cfcd7c04c                                                                                                                                4 minutes ago        Running             etcd                                     0                   988726333d243       etcd-addons-442400
	3d7d2b584fe65       c7aad43836fa5                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   4cce29065a3d8       kube-controller-manager-addons-442400
	69857aaae8077       259c8277fcbbc                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   b182df88815cb       kube-scheduler-addons-442400
	899f6d2cc1e84       c42f13656d0b2                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   c551053f90c0e       kube-apiserver-addons-442400
	
	
	==> controller_ingress [7ec222b11cd4] <==
	W0429 18:47:58.320708       8 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0429 18:47:58.321541       8 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0429 18:47:58.329233       8 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.0" state="clean" commit="7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a" platform="linux/amd64"
	I0429 18:47:58.592601       8 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0429 18:47:58.634929       8 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0429 18:47:58.653607       8 nginx.go:264] "Starting NGINX Ingress controller"
	I0429 18:47:58.680132       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9e736d11-140b-4ce6-a2f5-c926f67d4f1e", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0429 18:47:58.684500       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"d48ab9b6-b614-4d15-9bf6-31cf8eef9a87", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0429 18:47:58.684662       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"79357da4-7acd-4324-b35f-d1ff89b23ccc", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0429 18:47:59.861878       8 nginx.go:307] "Starting NGINX process"
	I0429 18:47:59.862277       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0429 18:47:59.863145       8 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0429 18:47:59.865731       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0429 18:47:59.894080       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0429 18:47:59.894598       8 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-wv72k"
	I0429 18:47:59.902423       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-wv72k" node="addons-442400"
	I0429 18:47:59.996273       8 controller.go:210] "Backend successfully reloaded"
	I0429 18:47:59.996350       8 controller.go:221] "Initial sync, sleeping for 1 second"
	I0429 18:47:59.996700       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-wv72k", UID:"0c706aa0-a679-4231-9c32-5e29aa4a4905", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [7a1b4d457054] <==
	[INFO] 10.244.0.7:43906 - 10594 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000661402s
	[INFO] 10.244.0.7:44405 - 30391 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001707s
	[INFO] 10.244.0.7:44405 - 21940 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000255501s
	[INFO] 10.244.0.7:33858 - 26045 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0002138s
	[INFO] 10.244.0.7:33858 - 947 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000399001s
	[INFO] 10.244.0.7:40563 - 31644 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000187701s
	[INFO] 10.244.0.7:40563 - 34971 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000197801s
	[INFO] 10.244.0.7:47531 - 62043 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000795s
	[INFO] 10.244.0.7:47531 - 47432 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000526s
	[INFO] 10.244.0.7:35548 - 37226 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001184s
	[INFO] 10.244.0.7:35548 - 22121 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000526s
	[INFO] 10.244.0.7:55754 - 59990 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107901s
	[INFO] 10.244.0.7:55754 - 33872 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001518s
	[INFO] 10.244.0.7:58308 - 55515 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000172801s
	[INFO] 10.244.0.7:58308 - 14300 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001493s
	[INFO] 10.244.0.22:43284 - 10641 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000450201s
	[INFO] 10.244.0.22:49362 - 46616 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00011s
	[INFO] 10.244.0.22:49392 - 8753 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001058s
	[INFO] 10.244.0.22:42757 - 1573 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000071801s
	[INFO] 10.244.0.22:37446 - 40999 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128s
	[INFO] 10.244.0.22:44740 - 64795 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131301s
	[INFO] 10.244.0.22:43471 - 10013 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.001350703s
	[INFO] 10.244.0.22:49606 - 38891 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002257304s
	[INFO] 10.244.0.25:33310 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000300601s
	[INFO] 10.244.0.25:53656 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000205901s
	
	
	==> describe nodes <==
	Name:               addons-442400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-442400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=addons-442400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T18_44_35_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-442400
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-442400"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 18:44:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-442400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 18:49:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 18:48:40 +0000   Mon, 29 Apr 2024 18:44:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 18:48:40 +0000   Mon, 29 Apr 2024 18:44:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 18:48:40 +0000   Mon, 29 Apr 2024 18:44:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 18:48:40 +0000   Mon, 29 Apr 2024 18:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.248.23
	  Hostname:    addons-442400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb39d45281534b0dbe3c18b3099504f3
	  System UUID:                2bef1915-44b0-6345-ad4b-bf45ea378c14
	  Boot ID:                    5094faae-5165-4dd4-a1b0-87a770eb2285
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6dc8d859f6-wxjzz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  gadget                      gadget-fxfn2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  gcp-auth                    gcp-auth-5db96cd9b4-qk92k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-wv72k    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m43s
	  kube-system                 coredns-7db6d8ff4d-sg4xf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m15s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 csi-hostpathplugin-p67t7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-addons-442400                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m32s
	  kube-system                 kube-apiserver-addons-442400                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-addons-442400        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-f8rwp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-addons-442400                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 nvidia-device-plugin-daemonset-fhh92         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 snapshot-controller-745499f584-7hj2b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 snapshot-controller-745499f584-w6fjs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  local-path-storage          local-path-provisioner-8d985888d-f8b7b       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-79778              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m41s (x8 over 4m41s)  kubelet          Node addons-442400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s (x8 over 4m41s)  kubelet          Node addons-442400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s (x7 over 4m41s)  kubelet          Node addons-442400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m32s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m32s                  kubelet          Node addons-442400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s                  kubelet          Node addons-442400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s                  kubelet          Node addons-442400 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m27s                  kubelet          Node addons-442400 status is now: NodeReady
	  Normal  RegisteredNode           4m18s                  node-controller  Node addons-442400 event: Registered Node addons-442400 in Controller
	
	
	==> dmesg <==
	[  +4.862676] hrtimer: interrupt took 4077520 ns
	[  +0.834552] kauditd_printk_skb: 17 callbacks suppressed
	[Apr29 18:45] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.440915] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.486565] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.468411] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.151607] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.038064] kauditd_printk_skb: 109 callbacks suppressed
	[ +13.850640] kauditd_printk_skb: 11 callbacks suppressed
	[Apr29 18:46] kauditd_printk_skb: 4 callbacks suppressed
	[Apr29 18:47] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.370816] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.920801] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.078474] kauditd_printk_skb: 41 callbacks suppressed
	[  +7.112851] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.251802] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.168643] kauditd_printk_skb: 2 callbacks suppressed
	[Apr29 18:48] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.702904] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.962014] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.535699] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.551076] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.141644] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.113261] kauditd_printk_skb: 15 callbacks suppressed
	[Apr29 18:49] kauditd_printk_skb: 61 callbacks suppressed
	
	
	==> etcd [0c712cd11b6c] <==
	{"level":"warn","ts":"2024-04-29T18:48:41.768748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.839777ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-04-29T18:48:41.76878Z","caller":"traceutil/trace.go:171","msg":"trace[309756841] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1467; }","duration":"268.901377ms","start":"2024-04-29T18:48:41.499871Z","end":"2024-04-29T18:48:41.768772Z","steps":["trace[309756841] 'range keys from in-memory index tree'  (duration: 268.779177ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:48:47.219602Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.26276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-04-29T18:48:47.219665Z","caller":"traceutil/trace.go:171","msg":"trace[2089338484] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1497; }","duration":"157.359861ms","start":"2024-04-29T18:48:47.062289Z","end":"2024-04-29T18:48:47.219649Z","steps":["trace[2089338484] 'range keys from in-memory index tree'  (duration: 157.10076ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:48:47.219662Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"356.900034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-fxfn2\" ","response":"range_response_count:1 size:11380"}
	{"level":"info","ts":"2024-04-29T18:48:47.219715Z","caller":"traceutil/trace.go:171","msg":"trace[26372180] range","detail":"{range_begin:/registry/pods/gadget/gadget-fxfn2; range_end:; response_count:1; response_revision:1497; }","duration":"356.981334ms","start":"2024-04-29T18:48:46.862724Z","end":"2024-04-29T18:48:47.219705Z","steps":["trace[26372180] 'range keys from in-memory index tree'  (duration: 356.796033ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:48:47.21974Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:48:46.862711Z","time spent":"357.021834ms","remote":"127.0.0.1:45150","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":11402,"request content":"key:\"/registry/pods/gadget/gadget-fxfn2\" "}
	{"level":"info","ts":"2024-04-29T18:48:59.416825Z","caller":"traceutil/trace.go:171","msg":"trace[950915640] linearizableReadLoop","detail":"{readStateIndex:1595; appliedIndex:1594; }","duration":"156.517645ms","start":"2024-04-29T18:48:59.26029Z","end":"2024-04-29T18:48:59.416807Z","steps":["trace[950915640] 'read index received'  (duration: 156.275844ms)","trace[950915640] 'applied index is now lower than readState.Index'  (duration: 241.201µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T18:48:59.417674Z","caller":"traceutil/trace.go:171","msg":"trace[823139341] transaction","detail":"{read_only:false; response_revision:1520; number_of_response:1; }","duration":"365.760342ms","start":"2024-04-29T18:48:59.0519Z","end":"2024-04-29T18:48:59.417661Z","steps":["trace[823139341] 'process raft request'  (duration: 364.605937ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:48:59.417764Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:48:59.051877Z","time spent":"365.824442ms","remote":"127.0.0.1:45232","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-hqxqnevgwvff25zzudepsl6sku\" mod_revision:1506 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-hqxqnevgwvff25zzudepsl6sku\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-hqxqnevgwvff25zzudepsl6sku\" > >"}
	{"level":"warn","ts":"2024-04-29T18:48:59.420769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.765651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6511"}
	{"level":"info","ts":"2024-04-29T18:48:59.421583Z","caller":"traceutil/trace.go:171","msg":"trace[1995476342] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1520; }","duration":"160.538764ms","start":"2024-04-29T18:48:59.260276Z","end":"2024-04-29T18:48:59.420815Z","steps":["trace[1995476342] 'agreement among raft nodes before linearized reading'  (duration: 157.691351ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:48:59.837722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.552613ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1511396565547386026 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1518 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T18:48:59.838326Z","caller":"traceutil/trace.go:171","msg":"trace[1361636559] linearizableReadLoop","detail":"{readStateIndex:1596; appliedIndex:1595; }","duration":"199.929952ms","start":"2024-04-29T18:48:59.638383Z","end":"2024-04-29T18:48:59.838313Z","steps":["trace[1361636559] 'read index received'  (duration: 91.592636ms)","trace[1361636559] 'applied index is now lower than readState.Index'  (duration: 108.335916ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T18:48:59.838606Z","caller":"traceutil/trace.go:171","msg":"trace[583404811] transaction","detail":"{read_only:false; response_revision:1521; number_of_response:1; }","duration":"410.497355ms","start":"2024-04-29T18:48:59.428094Z","end":"2024-04-29T18:48:59.838591Z","steps":["trace[583404811] 'process raft request'  (duration: 301.927338ms)","trace[583404811] 'compare'  (duration: 107.400012ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T18:48:59.83868Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:48:59.428075Z","time spent":"410.565856ms","remote":"127.0.0.1:45128","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1518 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-29T18:48:59.838955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.222153ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T18:48:59.839145Z","caller":"traceutil/trace.go:171","msg":"trace[1017590673] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1521; }","duration":"200.785856ms","start":"2024-04-29T18:48:59.638348Z","end":"2024-04-29T18:48:59.839134Z","steps":["trace[1017590673] 'agreement among raft nodes before linearized reading'  (duration: 200.220253ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:49:00.386385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.067513ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-04-29T18:49:00.386466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"324.871146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:1 size:501"}
	{"level":"info","ts":"2024-04-29T18:49:00.386518Z","caller":"traceutil/trace.go:171","msg":"trace[1691991606] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:1; response_revision:1521; }","duration":"324.957046ms","start":"2024-04-29T18:49:00.061547Z","end":"2024-04-29T18:49:00.386505Z","steps":["trace[1691991606] 'range keys from in-memory index tree'  (duration: 324.703045ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:49:00.38665Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:49:00.061516Z","time spent":"325.118947ms","remote":"127.0.0.1:45232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":523,"request content":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" "}
	{"level":"warn","ts":"2024-04-29T18:49:00.386794Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.390606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6511"}
	{"level":"info","ts":"2024-04-29T18:49:00.386822Z","caller":"traceutil/trace.go:171","msg":"trace[547073514] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1521; }","duration":"127.448907ms","start":"2024-04-29T18:49:00.259366Z","end":"2024-04-29T18:49:00.386815Z","steps":["trace[547073514] 'range keys from in-memory index tree'  (duration: 126.902403ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T18:49:00.386481Z","caller":"traceutil/trace.go:171","msg":"trace[2002372234] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1521; }","duration":"171.171314ms","start":"2024-04-29T18:49:00.215294Z","end":"2024-04-29T18:49:00.386465Z","steps":["trace[2002372234] 'range keys from in-memory index tree'  (duration: 171.053613ms)"],"step_count":1}
	
	
	==> gcp-auth [566c9d5b57da] <==
	2024/04/29 18:48:05 GCP Auth Webhook started!
	2024/04/29 18:48:10 Ready to marshal response ...
	2024/04/29 18:48:10 Ready to write response ...
	2024/04/29 18:48:11 Ready to marshal response ...
	2024/04/29 18:48:11 Ready to write response ...
	2024/04/29 18:48:21 Ready to marshal response ...
	2024/04/29 18:48:21 Ready to write response ...
	2024/04/29 18:48:34 Ready to marshal response ...
	2024/04/29 18:48:34 Ready to write response ...
	2024/04/29 18:48:38 Ready to marshal response ...
	2024/04/29 18:48:38 Ready to write response ...
	2024/04/29 18:48:41 Ready to marshal response ...
	2024/04/29 18:48:41 Ready to write response ...
	
	
	==> kernel <==
	 18:49:07 up 6 min,  0 users,  load average: 2.12, 2.20, 1.07
	Linux addons-442400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [899f6d2cc1e8] <==
	E0429 18:46:58.341820       1 wrap.go:54] timeout or abort while handling: method=GET URI="/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)addons-442400&limit=500&resourceVersion=0" audit-ID="cc4e5a68-a28d-41c7-824a-8ba0d1ba889b"
	E0429 18:46:58.345509       1 writers.go:135] apiserver was unable to write a fallback JSON response: client disconnected
	E0429 18:46:58.346413       1 timeout.go:142] post-timeout activity - time-elapsed: 4.524917ms, GET "/api/v1/pods" result: <nil>
	I0429 18:47:06.137802       1 trace.go:236] Trace[1461724030]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:718a73e4-0f1b-4ba0-8e2a-6b45c559b75c,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:apiserver-hqxqnevgwvff25zzudepsl6sku,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-hqxqnevgwvff25zzudepsl6sku,user-agent:kube-apiserver/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (29-Apr-2024 18:47:05.534) (total time: 603ms):
	Trace[1461724030]: ["GuaranteedUpdate etcd3" audit-id:718a73e4-0f1b-4ba0-8e2a-6b45c559b75c,key:/leases/kube-system/apiserver-hqxqnevgwvff25zzudepsl6sku,type:*coordination.Lease,resource:leases.coordination.k8s.io 602ms (18:47:05.534)
	Trace[1461724030]:  ---"Txn call completed" 601ms (18:47:06.137)]
	Trace[1461724030]: [603.23868ms] [603.23868ms] END
	I0429 18:47:31.840884       1 trace.go:236] Trace[1232135872]: "List" accept:application/json, */*,audit-id:d73d176f-275a-45bc-a36d-1833fca318ef,client:172.17.240.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Apr-2024 18:47:31.313) (total time: 526ms):
	Trace[1232135872]: ["List(recursive=true) etcd3" audit-id:d73d176f-275a-45bc-a36d-1833fca318ef,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 526ms (18:47:31.313)]
	Trace[1232135872]: [526.546745ms] [526.546745ms] END
	I0429 18:47:31.843756       1 trace.go:236] Trace[1465601860]: "List" accept:application/json, */*,audit-id:c6a2286b-f6a1-4981-af0d-c8c5c742bc09,client:172.17.240.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Apr-2024 18:47:31.237) (total time: 606ms):
	Trace[1465601860]: ["List(recursive=true) etcd3" audit-id:c6a2286b-f6a1-4981-af0d-c8c5c742bc09,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 606ms (18:47:31.237)]
	Trace[1465601860]: [606.723372ms] [606.723372ms] END
	I0429 18:48:05.738802       1 trace.go:236] Trace[1808521067]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f5e27171-e236-4d9a-884f-88171c4ee5a9,client:172.17.248.23,api-group:,api-version:v1,name:gcp-auth-certs-patch-ztm9m,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/gcp-auth/pods/gcp-auth-certs-patch-ztm9m,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:generic-garbage-collector,verb:DELETE (29-Apr-2024 18:48:05.082) (total time: 656ms):
	Trace[1808521067]: ---"Object deleted from database" 24ms (18:48:05.738)
	Trace[1808521067]: [656.007063ms] [656.007063ms] END
	I0429 18:48:05.766545       1 trace.go:236] Trace[1900605494]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:599d239b-1cd6-4e9b-865d-0632a76d4573,client:172.17.248.23,api-group:,api-version:v1,name:gcp-auth-certs-create-sjxlv,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/gcp-auth/pods/gcp-auth-certs-create-sjxlv,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:generic-garbage-collector,verb:DELETE (29-Apr-2024 18:48:05.083) (total time: 682ms):
	Trace[1900605494]: ---"Object deleted from database" 34ms (18:48:05.762)
	Trace[1900605494]: [682.787618ms] [682.787618ms] END
	E0429 18:48:07.320338       1 wrap.go:54] timeout or abort while handling: method=GET URI="/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)addons-442400&limit=500&resourceVersion=0" audit-ID="54ff810c-bbb2-45e6-b111-125442e804da"
	E0429 18:48:07.320905       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 18:48:07.321617       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 18:48:07.323185       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 18:48:07.323639       1 timeout.go:142] post-timeout activity - time-elapsed: 3.235406ms, GET "/api/v1/pods" result: <nil>
	I0429 18:48:40.957448       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [3d7d2b584fe6] <==
	I0429 18:47:30.193657       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="98.7µs"
	I0429 18:47:32.948371       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 18:47:32.967962       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 18:47:34.390278       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 18:47:34.411626       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 18:47:35.132895       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 18:47:35.165717       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 18:47:35.404286       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 18:47:35.421013       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 18:47:35.427115       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 18:47:35.433101       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 18:47:35.443155       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 18:47:35.456770       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 18:47:59.100830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="78.401µs"
	I0429 18:48:05.043028       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 18:48:05.050836       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 18:48:05.825650       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 18:48:05.842081       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 18:48:06.504177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="52.830509ms"
	I0429 18:48:06.507035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="103.3µs"
	I0429 18:48:10.711279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="39.008881ms"
	I0429 18:48:10.712048       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="435.5µs"
	I0429 18:48:32.519556       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="6.1µs"
	I0429 18:48:44.424947       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="8.6µs"
	I0429 18:49:03.032607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="5.2µs"
	
	
	==> kube-proxy [1a9ed61c6b29] <==
	I0429 18:44:59.866859       1 server_linux.go:69] "Using iptables proxy"
	I0429 18:44:59.964483       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.248.23"]
	I0429 18:45:00.207540       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 18:45:00.207635       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 18:45:00.207670       1 server_linux.go:165] "Using iptables Proxier"
	I0429 18:45:00.235333       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 18:45:00.235729       1 server.go:872] "Version info" version="v1.30.0"
	I0429 18:45:00.235753       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 18:45:00.238558       1 config.go:192] "Starting service config controller"
	I0429 18:45:00.238616       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 18:45:00.238659       1 config.go:101] "Starting endpoint slice config controller"
	I0429 18:45:00.238668       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 18:45:00.239996       1 config.go:319] "Starting node config controller"
	I0429 18:45:00.240018       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 18:45:00.341641       1 shared_informer.go:320] Caches are synced for node config
	I0429 18:45:00.342137       1 shared_informer.go:320] Caches are synced for service config
	I0429 18:45:00.342805       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [69857aaae807] <==
	W0429 18:44:32.074405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 18:44:32.074750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 18:44:32.076510       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 18:44:32.077930       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 18:44:32.133773       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 18:44:32.133984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 18:44:32.162904       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 18:44:32.163096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 18:44:32.184725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 18:44:32.185287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 18:44:32.191986       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 18:44:32.192558       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 18:44:32.196387       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 18:44:32.196898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 18:44:32.205333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 18:44:32.205497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 18:44:32.243911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 18:44:32.244181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 18:44:32.603444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 18:44:32.604006       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 18:44:32.606424       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 18:44:32.606583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 18:44:32.677017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 18:44:32.677323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0429 18:44:34.960359       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 18:48:45 addons-442400 kubelet[2128]: I0429 18:48:45.819391    2128 scope.go:117] "RemoveContainer" containerID="41a520346a8bcb246b777497b5be43a2b8d3f4c8f96cc06d1267fc4028ca8849"
	Apr 29 18:48:45 addons-442400 kubelet[2128]: I0429 18:48:45.867921    2128 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjcdf\" (UniqueName: \"kubernetes.io/projected/25144203-305f-40fb-9c65-8c59773521bc-kube-api-access-wjcdf\") pod \"25144203-305f-40fb-9c65-8c59773521bc\" (UID: \"25144203-305f-40fb-9c65-8c59773521bc\") "
	Apr 29 18:48:45 addons-442400 kubelet[2128]: I0429 18:48:45.876049    2128 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25144203-305f-40fb-9c65-8c59773521bc-kube-api-access-wjcdf" (OuterVolumeSpecName: "kube-api-access-wjcdf") pod "25144203-305f-40fb-9c65-8c59773521bc" (UID: "25144203-305f-40fb-9c65-8c59773521bc"). InnerVolumeSpecName "kube-api-access-wjcdf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 18:48:45 addons-442400 kubelet[2128]: I0429 18:48:45.969708    2128 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wjcdf\" (UniqueName: \"kubernetes.io/projected/25144203-305f-40fb-9c65-8c59773521bc-kube-api-access-wjcdf\") on node \"addons-442400\" DevicePath \"\""
	Apr 29 18:48:46 addons-442400 kubelet[2128]: I0429 18:48:46.675115    2128 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b2dc4f-95c0-4239-b5fb-3b21d4a53327" path="/var/lib/kubelet/pods/75b2dc4f-95c0-4239-b5fb-3b21d4a53327/volumes"
	Apr 29 18:48:46 addons-442400 kubelet[2128]: I0429 18:48:46.860003    2128 scope.go:117] "RemoveContainer" containerID="c15966dc324e0d5a74eeca85db564d043713928bb6f0572969aecbfd612cf63d"
	Apr 29 18:48:46 addons-442400 kubelet[2128]: E0429 18:48:46.860799    2128 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-fxfn2_gadget(a97dcda8-caec-41ca-b8a2-1406403f9fa1)\"" pod="gadget/gadget-fxfn2" podUID="a97dcda8-caec-41ca-b8a2-1406403f9fa1"
	Apr 29 18:48:46 addons-442400 kubelet[2128]: I0429 18:48:46.878149    2128 scope.go:117] "RemoveContainer" containerID="b9ac6a0be40a6f6bf14f98ea8f2fa25dbe0a855abfeebcc6f30321bd0afbfa70"
	Apr 29 18:48:47 addons-442400 kubelet[2128]: I0429 18:48:47.403624    2128 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjmhr\" (UniqueName: \"kubernetes.io/projected/d4385153-0992-4c67-8e2b-27fdd3a8b6b6-kube-api-access-fjmhr\") pod \"d4385153-0992-4c67-8e2b-27fdd3a8b6b6\" (UID: \"d4385153-0992-4c67-8e2b-27fdd3a8b6b6\") "
	Apr 29 18:48:47 addons-442400 kubelet[2128]: I0429 18:48:47.417148    2128 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4385153-0992-4c67-8e2b-27fdd3a8b6b6-kube-api-access-fjmhr" (OuterVolumeSpecName: "kube-api-access-fjmhr") pod "d4385153-0992-4c67-8e2b-27fdd3a8b6b6" (UID: "d4385153-0992-4c67-8e2b-27fdd3a8b6b6"). InnerVolumeSpecName "kube-api-access-fjmhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 18:48:47 addons-442400 kubelet[2128]: I0429 18:48:47.505184    2128 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fjmhr\" (UniqueName: \"kubernetes.io/projected/d4385153-0992-4c67-8e2b-27fdd3a8b6b6-kube-api-access-fjmhr\") on node \"addons-442400\" DevicePath \"\""
	Apr 29 18:48:47 addons-442400 kubelet[2128]: I0429 18:48:47.953624    2128 scope.go:117] "RemoveContainer" containerID="c15966dc324e0d5a74eeca85db564d043713928bb6f0572969aecbfd612cf63d"
	Apr 29 18:48:47 addons-442400 kubelet[2128]: E0429 18:48:47.954067    2128 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-fxfn2_gadget(a97dcda8-caec-41ca-b8a2-1406403f9fa1)\"" pod="gadget/gadget-fxfn2" podUID="a97dcda8-caec-41ca-b8a2-1406403f9fa1"
	Apr 29 18:48:47 addons-442400 kubelet[2128]: I0429 18:48:47.954113    2128 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be117b3f8c95657be3dc592a258cbdaddb5e06145890dd64e07ca8e626aa4dd3"
	Apr 29 18:48:48 addons-442400 kubelet[2128]: I0429 18:48:48.644009    2128 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25144203-305f-40fb-9c65-8c59773521bc" path="/var/lib/kubelet/pods/25144203-305f-40fb-9c65-8c59773521bc/volumes"
	Apr 29 18:48:48 addons-442400 kubelet[2128]: I0429 18:48:48.645005    2128 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4385153-0992-4c67-8e2b-27fdd3a8b6b6" path="/var/lib/kubelet/pods/d4385153-0992-4c67-8e2b-27fdd3a8b6b6/volumes"
	Apr 29 18:49:03 addons-442400 kubelet[2128]: I0429 18:49:03.055108    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod" podStartSLOduration=6.091122716 podStartE2EDuration="22.055072113s" podCreationTimestamp="2024-04-29 18:48:41 +0000 UTC" firstStartedPulling="2024-04-29 18:48:42.833471571 +0000 UTC m=+248.451332953" lastFinishedPulling="2024-04-29 18:48:58.797420968 +0000 UTC m=+264.415282350" observedRunningTime="2024-04-29 18:49:01.437684435 +0000 UTC m=+267.055545917" watchObservedRunningTime="2024-04-29 18:49:03.055072113 +0000 UTC m=+268.672933595"
	Apr 29 18:49:03 addons-442400 kubelet[2128]: I0429 18:49:03.594562    2128 scope.go:117] "RemoveContainer" containerID="c15966dc324e0d5a74eeca85db564d043713928bb6f0572969aecbfd612cf63d"
	Apr 29 18:49:03 addons-442400 kubelet[2128]: E0429 18:49:03.595107    2128 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-fxfn2_gadget(a97dcda8-caec-41ca-b8a2-1406403f9fa1)\"" pod="gadget/gadget-fxfn2" podUID="a97dcda8-caec-41ca-b8a2-1406403f9fa1"
	Apr 29 18:49:03 addons-442400 kubelet[2128]: I0429 18:49:03.632119    2128 scope.go:117] "RemoveContainer" containerID="d676f03050be6c8139647deaa2750da9b206a2805dc83376b69358d4cc6f6889"
	Apr 29 18:49:03 addons-442400 kubelet[2128]: I0429 18:49:03.676456    2128 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkmc8\" (UniqueName: \"kubernetes.io/projected/88fefac1-a788-4a3d-9774-10960137a07d-kube-api-access-mkmc8\") pod \"88fefac1-a788-4a3d-9774-10960137a07d\" (UID: \"88fefac1-a788-4a3d-9774-10960137a07d\") "
	Apr 29 18:49:03 addons-442400 kubelet[2128]: I0429 18:49:03.682079    2128 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fefac1-a788-4a3d-9774-10960137a07d-kube-api-access-mkmc8" (OuterVolumeSpecName: "kube-api-access-mkmc8") pod "88fefac1-a788-4a3d-9774-10960137a07d" (UID: "88fefac1-a788-4a3d-9774-10960137a07d"). InnerVolumeSpecName "kube-api-access-mkmc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 18:49:03 addons-442400 kubelet[2128]: I0429 18:49:03.778458    2128 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mkmc8\" (UniqueName: \"kubernetes.io/projected/88fefac1-a788-4a3d-9774-10960137a07d-kube-api-access-mkmc8\") on node \"addons-442400\" DevicePath \"\""
	Apr 29 18:49:06 addons-442400 kubelet[2128]: I0429 18:49:06.595348    2128 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fhh92" secret="" err="secret \"gcp-auth\" not found"
	Apr 29 18:49:06 addons-442400 kubelet[2128]: I0429 18:49:06.626045    2128 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fefac1-a788-4a3d-9774-10960137a07d" path="/var/lib/kubelet/pods/88fefac1-a788-4a3d-9774-10960137a07d/volumes"
	
	
	==> storage-provisioner [07b30da5b7c0] <==
	I0429 18:45:17.351943       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 18:45:17.994438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 18:45:17.994533       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 18:45:18.501581       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 18:45:18.501771       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-442400_6322f88d-fa32-413c-9ee2-10959477691e!
	I0429 18:45:18.501867       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9123f660-3862-45c7-9a64-718a39dc411e", APIVersion:"v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-442400_6322f88d-fa32-413c-9ee2-10959477691e became leader
	I0429 18:45:18.715993       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-442400_6322f88d-fa32-413c-9ee2-10959477691e!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 18:48:58.092343    2004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-442400 -n addons-442400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-442400 -n addons-442400: (13.2877067s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-442400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-w8nwn ingress-nginx-admission-patch-mkg6k
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-442400 describe pod ingress-nginx-admission-create-w8nwn ingress-nginx-admission-patch-mkg6k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-442400 describe pod ingress-nginx-admission-create-w8nwn ingress-nginx-admission-patch-mkg6k: exit status 1 (187.9221ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-w8nwn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mkg6k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-442400 describe pod ingress-nginx-admission-create-w8nwn ingress-nginx-admission-patch-mkg6k: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.19s)

TestErrorSpam/setup (201.78s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-472100 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 --driver=hyperv
E0429 18:53:10.170711   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:10.186290   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:10.201411   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:10.233301   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:10.279258   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:10.376670   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:10.551052   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:10.884822   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:11.538720   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:12.823786   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:15.386732   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:20.507522   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:30.754130   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:53:51.250562   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:54:32.222593   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 18:55:54.151456   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-472100 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 --driver=hyperv: (3m21.7796015s)
error_spam_test.go:96: unexpected stderr: "W0429 18:52:36.002164    1544 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-472100] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=18774
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-472100" primary control-plane node in "nospam-472100" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-472100" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0429 18:52:36.002164    1544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (201.78s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (34.78s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-980800 -n functional-980800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-980800 -n functional-980800: (12.2972821s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 logs -n 25: (8.9048058s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-472100 --log_dir                                     | nospam-472100     | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:57 UTC | 29 Apr 24 18:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-472100 --log_dir                                     | nospam-472100     | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:57 UTC | 29 Apr 24 18:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-472100 --log_dir                                     | nospam-472100     | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:57 UTC | 29 Apr 24 18:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-472100 --log_dir                                     | nospam-472100     | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:57 UTC | 29 Apr 24 18:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-472100 --log_dir                                     | nospam-472100     | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:57 UTC | 29 Apr 24 18:58 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-472100 --log_dir                                     | nospam-472100     | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:58 UTC | 29 Apr 24 18:58 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-472100 --log_dir                                     | nospam-472100     | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:58 UTC | 29 Apr 24 18:58 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-472100                                            | nospam-472100     | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:58 UTC | 29 Apr 24 18:59 UTC |
	| start   | -p functional-980800                                        | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:59 UTC | 29 Apr 24 19:03 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-980800                                        | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:03 UTC | 29 Apr 24 19:05 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-980800 cache add                                 | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:05 UTC | 29 Apr 24 19:05 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-980800 cache add                                 | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:05 UTC | 29 Apr 24 19:05 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-980800 cache add                                 | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:05 UTC | 29 Apr 24 19:05 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-980800 cache add                                 | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:05 UTC | 29 Apr 24 19:05 UTC |
	|         | minikube-local-cache-test:functional-980800                 |                   |                   |         |                     |                     |
	| cache   | functional-980800 cache delete                              | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:05 UTC | 29 Apr 24 19:05 UTC |
	|         | minikube-local-cache-test:functional-980800                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:05 UTC | 29 Apr 24 19:05 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:05 UTC | 29 Apr 24 19:05 UTC |
	| ssh     | functional-980800 ssh sudo                                  | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:05 UTC | 29 Apr 24 19:06 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-980800                                           | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:06 UTC | 29 Apr 24 19:06 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-980800 ssh                                       | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:06 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-980800 cache reload                              | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:06 UTC | 29 Apr 24 19:06 UTC |
	| ssh     | functional-980800 ssh                                       | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:06 UTC | 29 Apr 24 19:06 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:06 UTC | 29 Apr 24 19:06 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:06 UTC | 29 Apr 24 19:06 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-980800 kubectl --                                | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:06 UTC | 29 Apr 24 19:06 UTC |
	|         | --context functional-980800                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:03:08
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:03:08.870087    4360 out.go:291] Setting OutFile to fd 636 ...
	I0429 19:03:08.871099    4360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:03:08.871099    4360 out.go:304] Setting ErrFile to fd 952...
	I0429 19:03:08.871099    4360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:03:08.894004    4360 out.go:298] Setting JSON to false
	I0429 19:03:08.897843    4360 start.go:129] hostinfo: {"hostname":"minikube6","uptime":19328,"bootTime":1714398060,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 19:03:08.897843    4360 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 19:03:08.902648    4360 out.go:177] * [functional-980800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 19:03:08.906195    4360 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:03:08.906195    4360 notify.go:220] Checking for updates...
	I0429 19:03:08.910620    4360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:03:08.913030    4360 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 19:03:08.916433    4360 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:03:08.918758    4360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:03:08.922642    4360 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:03:08.922642    4360 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:03:14.391661    4360 out.go:177] * Using the hyperv driver based on existing profile
	I0429 19:03:14.395732    4360 start.go:297] selected driver: hyperv
	I0429 19:03:14.395732    4360 start.go:901] validating driver "hyperv" against &{Name:functional-980800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-980800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.245.90 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:03:14.395732    4360 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:03:14.451142    4360 cni.go:84] Creating CNI manager for ""
	I0429 19:03:14.451142    4360 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 19:03:14.451677    4360 start.go:340] cluster config:
	{Name:functional-980800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-980800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.245.90 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:03:14.452125    4360 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:03:14.456384    4360 out.go:177] * Starting "functional-980800" primary control-plane node in "functional-980800" cluster
	I0429 19:03:14.458975    4360 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:03:14.459045    4360 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 19:03:14.459045    4360 cache.go:56] Caching tarball of preloaded images
	I0429 19:03:14.459045    4360 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 19:03:14.459652    4360 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 19:03:14.459932    4360 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\config.json ...
	I0429 19:03:14.462383    4360 start.go:360] acquireMachinesLock for functional-980800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:03:14.462598    4360 start.go:364] duration metric: took 124.2µs to acquireMachinesLock for "functional-980800"
	I0429 19:03:14.462811    4360 start.go:96] Skipping create...Using existing machine configuration
	I0429 19:03:14.462894    4360 fix.go:54] fixHost starting: 
	I0429 19:03:14.463299    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:17.200262    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:17.200262    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:17.201136    4360 fix.go:112] recreateIfNeeded on functional-980800: state=Running err=<nil>
	W0429 19:03:17.201136    4360 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 19:03:17.204684    4360 out.go:177] * Updating the running hyperv "functional-980800" VM ...
	I0429 19:03:17.206785    4360 machine.go:94] provisionDockerMachine start ...
	I0429 19:03:17.206785    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:19.405494    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:19.405494    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:19.405605    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:03:22.051421    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:03:22.051944    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:22.058445    4360 main.go:141] libmachine: Using SSH client type: native
	I0429 19:03:22.059432    4360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.245.90 22 <nil> <nil>}
	I0429 19:03:22.059432    4360 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:03:22.209793    4360 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-980800
	
	I0429 19:03:22.209793    4360 buildroot.go:166] provisioning hostname "functional-980800"
	I0429 19:03:22.209793    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:24.338470    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:24.338741    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:24.338839    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:03:26.919691    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:03:26.919691    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:26.927026    4360 main.go:141] libmachine: Using SSH client type: native
	I0429 19:03:26.927026    4360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.245.90 22 <nil> <nil>}
	I0429 19:03:26.927606    4360 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-980800 && echo "functional-980800" | sudo tee /etc/hostname
	I0429 19:03:27.096701    4360 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-980800
	
	I0429 19:03:27.096701    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:29.269922    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:29.269922    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:29.270119    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:03:31.902025    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:03:31.902025    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:31.908535    4360 main.go:141] libmachine: Using SSH client type: native
	I0429 19:03:31.908983    4360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.245.90 22 <nil> <nil>}
	I0429 19:03:31.908983    4360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-980800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-980800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-980800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:03:32.064229    4360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:03:32.064293    4360 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 19:03:32.064355    4360 buildroot.go:174] setting up certificates
	I0429 19:03:32.064450    4360 provision.go:84] configureAuth start
	I0429 19:03:32.064450    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:34.203799    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:34.203799    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:34.204851    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:03:36.800910    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:03:36.802110    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:36.802198    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:38.965485    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:38.965485    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:38.965485    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:03:41.591197    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:03:41.591740    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:41.591740    4360 provision.go:143] copyHostCerts
	I0429 19:03:41.591740    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 19:03:41.591740    4360 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 19:03:41.591740    4360 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 19:03:41.592717    4360 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 19:03:41.593927    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 19:03:41.594320    4360 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 19:03:41.594409    4360 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 19:03:41.594609    4360 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 19:03:41.596421    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 19:03:41.596781    4360 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 19:03:41.596781    4360 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 19:03:41.597204    4360 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 19:03:41.598065    4360 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-980800 san=[127.0.0.1 172.17.245.90 functional-980800 localhost minikube]
	I0429 19:03:41.741179    4360 provision.go:177] copyRemoteCerts
	I0429 19:03:41.754424    4360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:03:41.754424    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:43.888514    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:43.888514    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:43.888514    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:03:46.514860    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:03:46.515575    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:46.515666    4360 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
	I0429 19:03:46.631568    4360 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8770114s)
	I0429 19:03:46.631630    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 19:03:46.631878    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 19:03:46.688140    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 19:03:46.688686    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 19:03:46.738007    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 19:03:46.738523    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:03:46.792909    4360 provision.go:87] duration metric: took 14.728353s to configureAuth
	I0429 19:03:46.793040    4360 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:03:46.793636    4360 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:03:46.793741    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:48.981018    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:48.981077    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:48.981077    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:03:51.612916    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:03:51.612916    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:51.619412    4360 main.go:141] libmachine: Using SSH client type: native
	I0429 19:03:51.620086    4360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.245.90 22 <nil> <nil>}
	I0429 19:03:51.620086    4360 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 19:03:51.769896    4360 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 19:03:51.769896    4360 buildroot.go:70] root file system type: tmpfs
	I0429 19:03:51.770547    4360 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 19:03:51.770661    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:53.956108    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:53.956108    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:53.956959    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:03:56.548224    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:03:56.548224    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:56.555351    4360 main.go:141] libmachine: Using SSH client type: native
	I0429 19:03:56.555894    4360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.245.90 22 <nil> <nil>}
	I0429 19:03:56.556018    4360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 19:03:56.732349    4360 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 19:03:56.732920    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:03:58.874943    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:03:58.874943    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:03:58.875730    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:04:01.535515    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:04:01.535515    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:01.543004    4360 main.go:141] libmachine: Using SSH client type: native
	I0429 19:04:01.543442    4360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.245.90 22 <nil> <nil>}
	I0429 19:04:01.543442    4360 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 19:04:01.694328    4360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:04:01.694328    4360 machine.go:97] duration metric: took 44.4872234s to provisionDockerMachine
	I0429 19:04:01.694328    4360 start.go:293] postStartSetup for "functional-980800" (driver="hyperv")
	I0429 19:04:01.694328    4360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:04:01.709138    4360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:04:01.709138    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:04:03.884365    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:04:03.884593    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:03.884652    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:04:06.512914    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:04:06.512914    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:06.513878    4360 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
	I0429 19:04:06.628478    4360 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9184729s)
	I0429 19:04:06.642063    4360 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:04:06.650344    4360 command_runner.go:130] > NAME=Buildroot
	I0429 19:04:06.650487    4360 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 19:04:06.650487    4360 command_runner.go:130] > ID=buildroot
	I0429 19:04:06.650487    4360 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 19:04:06.650487    4360 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 19:04:06.650610    4360 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:04:06.650743    4360 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 19:04:06.651340    4360 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 19:04:06.653019    4360 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 19:04:06.653128    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 19:04:06.654812    4360 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\13756\hosts -> hosts in /etc/test/nested/copy/13756
	I0429 19:04:06.654926    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\13756\hosts -> /etc/test/nested/copy/13756/hosts
	I0429 19:04:06.670669    4360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13756
	I0429 19:04:06.692696    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 19:04:06.746970    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\13756\hosts --> /etc/test/nested/copy/13756/hosts (40 bytes)
	I0429 19:04:06.799265    4360 start.go:296] duration metric: took 5.104751s for postStartSetup
	I0429 19:04:06.799265    4360 fix.go:56] duration metric: took 52.336079s for fixHost
	I0429 19:04:06.799447    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:04:08.951870    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:04:08.952615    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:08.952615    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:04:11.561463    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:04:11.561605    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:11.567426    4360 main.go:141] libmachine: Using SSH client type: native
	I0429 19:04:11.568249    4360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.245.90 22 <nil> <nil>}
	I0429 19:04:11.568249    4360 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:04:11.719847    4360 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714417451.727268191
	
	I0429 19:04:11.719847    4360 fix.go:216] guest clock: 1714417451.727268191
	I0429 19:04:11.719847    4360 fix.go:229] Guest: 2024-04-29 19:04:11.727268191 +0000 UTC Remote: 2024-04-29 19:04:06.7992652 +0000 UTC m=+58.122140301 (delta=4.928002991s)
	I0429 19:04:11.720121    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:04:13.863918    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:04:13.864251    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:13.864356    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:04:16.497774    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:04:16.497774    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:16.504415    4360 main.go:141] libmachine: Using SSH client type: native
	I0429 19:04:16.504415    4360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.245.90 22 <nil> <nil>}
	I0429 19:04:16.504415    4360 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714417451
	I0429 19:04:16.666034    4360 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 19:04:11 UTC 2024
	
	I0429 19:04:16.666034    4360 fix.go:236] clock set: Mon Apr 29 19:04:11 UTC 2024
	 (err=<nil>)
	I0429 19:04:16.666034    4360 start.go:83] releasing machines lock for "functional-980800", held for 1m2.2029904s
	I0429 19:04:16.666399    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:04:18.810275    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:04:18.810359    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:18.810428    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:04:21.434525    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:04:21.435685    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:21.440618    4360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:04:21.440618    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:04:21.452493    4360 ssh_runner.go:195] Run: cat /version.json
	I0429 19:04:21.452493    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:04:23.666268    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:04:23.667095    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:23.667095    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:04:23.667095    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:23.667095    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:04:23.667095    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:04:26.431874    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:04:26.431874    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:26.432689    4360 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
	I0429 19:04:26.461732    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:04:26.461732    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:04:26.461732    4360 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
	I0429 19:04:26.538595    4360 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 19:04:26.538652    4360 ssh_runner.go:235] Completed: cat /version.json: (5.0861225s)
	I0429 19:04:26.553141    4360 ssh_runner.go:195] Run: systemctl --version
	I0429 19:04:26.608863    4360 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 19:04:26.608863    4360 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1682081s)
	I0429 19:04:26.609908    4360 command_runner.go:130] > systemd 252 (252)
	I0429 19:04:26.609908    4360 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 19:04:26.624067    4360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 19:04:26.631927    4360 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 19:04:26.632945    4360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:04:26.647736    4360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:04:26.671694    4360 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 19:04:26.672029    4360 start.go:494] detecting cgroup driver to use...
	I0429 19:04:26.672359    4360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:04:26.724117    4360 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 19:04:26.739402    4360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 19:04:26.782226    4360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 19:04:26.803973    4360 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 19:04:26.818520    4360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 19:04:26.854796    4360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:04:26.895347    4360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 19:04:26.939140    4360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:04:26.978277    4360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:04:27.020200    4360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 19:04:27.058410    4360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 19:04:27.095307    4360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 19:04:27.132243    4360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:04:27.155324    4360 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 19:04:27.170268    4360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:04:27.203136    4360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:04:27.510799    4360 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 19:04:27.543955    4360 start.go:494] detecting cgroup driver to use...
	I0429 19:04:27.559822    4360 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 19:04:27.593228    4360 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 19:04:27.593297    4360 command_runner.go:130] > [Unit]
	I0429 19:04:27.593297    4360 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 19:04:27.593336    4360 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 19:04:27.593336    4360 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 19:04:27.593336    4360 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 19:04:27.593404    4360 command_runner.go:130] > StartLimitBurst=3
	I0429 19:04:27.593404    4360 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 19:04:27.593404    4360 command_runner.go:130] > [Service]
	I0429 19:04:27.593404    4360 command_runner.go:130] > Type=notify
	I0429 19:04:27.593453    4360 command_runner.go:130] > Restart=on-failure
	I0429 19:04:27.593453    4360 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 19:04:27.593503    4360 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 19:04:27.593503    4360 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 19:04:27.593546    4360 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 19:04:27.593546    4360 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 19:04:27.593546    4360 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 19:04:27.593602    4360 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 19:04:27.593602    4360 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 19:04:27.593645    4360 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 19:04:27.593645    4360 command_runner.go:130] > ExecStart=
	I0429 19:04:27.593691    4360 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 19:04:27.593691    4360 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 19:04:27.593751    4360 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 19:04:27.593751    4360 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 19:04:27.593751    4360 command_runner.go:130] > LimitNOFILE=infinity
	I0429 19:04:27.593814    4360 command_runner.go:130] > LimitNPROC=infinity
	I0429 19:04:27.593814    4360 command_runner.go:130] > LimitCORE=infinity
	I0429 19:04:27.593814    4360 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 19:04:27.593855    4360 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 19:04:27.593855    4360 command_runner.go:130] > TasksMax=infinity
	I0429 19:04:27.593855    4360 command_runner.go:130] > TimeoutStartSec=0
	I0429 19:04:27.593855    4360 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 19:04:27.593926    4360 command_runner.go:130] > Delegate=yes
	I0429 19:04:27.593926    4360 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 19:04:27.593926    4360 command_runner.go:130] > KillMode=process
	I0429 19:04:27.593987    4360 command_runner.go:130] > [Install]
	I0429 19:04:27.593987    4360 command_runner.go:130] > WantedBy=multi-user.target
	I0429 19:04:27.609486    4360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:04:27.647818    4360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:04:27.706112    4360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:04:27.758574    4360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:04:27.786409    4360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:04:27.827157    4360 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 19:04:27.841110    4360 ssh_runner.go:195] Run: which cri-dockerd
	I0429 19:04:27.847185    4360 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 19:04:27.860878    4360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 19:04:27.881509    4360 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
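The 189-byte drop-in copied above wires cri-dockerd up to the CNI network plugin. Its contents are not printed in this log; a drop-in of roughly this shape is what minikube typically writes (only the path `/etc/systemd/system/cri-docker.service.d/10-cni.conf` and the size come from the log, the flags below are an assumption):

```
[Service]
ExecStart=
# Flags are illustrative; inspect the actual file on the node.
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --hairpin-mode=hairpin-veth
```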
	I0429 19:04:27.934864    4360 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 19:04:28.252275    4360 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 19:04:28.525740    4360 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 19:04:28.526050    4360 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 19:04:28.580384    4360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:04:28.880521    4360 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 19:04:41.840571    4360 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.9599113s)
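The 130-byte `/etc/docker/daemon.json` written just before this restart is what switches Docker to the "cgroupfs" cgroup driver mentioned at 19:04:28.525740. The log does not print the file; a minimal sketch of such a config (the exact field set is an assumption) is:

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
```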
	I0429 19:04:41.855757    4360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 19:04:41.904292    4360 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0429 19:04:41.963833    4360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:04:42.012748    4360 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 19:04:42.259136    4360 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 19:04:42.518285    4360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:04:42.756689    4360 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 19:04:42.804574    4360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:04:42.849019    4360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:04:43.119134    4360 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 19:04:43.276613    4360 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 19:04:43.293578    4360 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 19:04:43.303084    4360 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 19:04:43.303168    4360 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 19:04:43.303168    4360 command_runner.go:130] > Device: 0,22	Inode: 1454        Links: 1
	I0429 19:04:43.303168    4360 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 19:04:43.303168    4360 command_runner.go:130] > Access: 2024-04-29 19:04:43.163671373 +0000
	I0429 19:04:43.303168    4360 command_runner.go:130] > Modify: 2024-04-29 19:04:43.163671373 +0000
	I0429 19:04:43.303168    4360 command_runner.go:130] > Change: 2024-04-29 19:04:43.167672081 +0000
	I0429 19:04:43.303168    4360 command_runner.go:130] >  Birth: -
	I0429 19:04:43.303168    4360 start.go:562] Will wait 60s for crictl version
	I0429 19:04:43.316535    4360 ssh_runner.go:195] Run: which crictl
	I0429 19:04:43.321747    4360 command_runner.go:130] > /usr/bin/crictl
	I0429 19:04:43.335294    4360 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:04:43.390542    4360 command_runner.go:130] > Version:  0.1.0
	I0429 19:04:43.390542    4360 command_runner.go:130] > RuntimeName:  docker
	I0429 19:04:43.390542    4360 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 19:04:43.390542    4360 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 19:04:43.390711    4360 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 19:04:43.402320    4360 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:04:43.436694    4360 command_runner.go:130] > 26.0.2
	I0429 19:04:43.449100    4360 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:04:43.484558    4360 command_runner.go:130] > 26.0.2
	I0429 19:04:43.490072    4360 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 19:04:43.490270    4360 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 19:04:43.495289    4360 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 19:04:43.495289    4360 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 19:04:43.495289    4360 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 19:04:43.495289    4360 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 19:04:43.498516    4360 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 19:04:43.498516    4360 ip.go:210] interface addr: 172.17.240.1/20
	I0429 19:04:43.511474    4360 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 19:04:43.519359    4360 command_runner.go:130] > 172.17.240.1	host.minikube.internal
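The grep above confirms `host.minikube.internal` already resolves on the node. The usual idempotent pattern behind it (the guarded append is an assumption here; the log only shows the successful grep) can be sketched as:

```shell
# Add "172.17.240.1  host.minikube.internal" only if it is not present yet.
# A temp file stands in for /etc/hosts so the sketch is safe to run.
hosts=$(mktemp)
ip="172.17.240.1"
name="host.minikube.internal"
grep -q "$ip.*$name" "$hosts" || printf '%s\t%s\n' "$ip" "$name" >> "$hosts"
grep "$name" "$hosts"
```

Because the append is guarded by the grep, re-running the line never duplicates the entry.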
	I0429 19:04:43.519783    4360 kubeadm.go:877] updating cluster {Name:functional-980800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.0 ClusterName:functional-980800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.245.90 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:04:43.519966    4360 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:04:43.530782    4360 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 19:04:43.556051    4360 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 19:04:43.556923    4360 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 19:04:43.556923    4360 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 19:04:43.556923    4360 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 19:04:43.556923    4360 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 19:04:43.556923    4360 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 19:04:43.556923    4360 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 19:04:43.557034    4360 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:04:43.557034    4360 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 19:04:43.557147    4360 docker.go:615] Images already preloaded, skipping extraction
	I0429 19:04:43.567755    4360 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 19:04:43.593007    4360 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 19:04:43.593007    4360 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 19:04:43.593007    4360 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 19:04:43.593104    4360 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 19:04:43.593190    4360 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 19:04:43.593190    4360 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 19:04:43.593190    4360 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 19:04:43.593190    4360 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:04:43.593252    4360 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 19:04:43.593331    4360 cache_images.go:84] Images are preloaded, skipping loading
	I0429 19:04:43.593378    4360 kubeadm.go:928] updating node { 172.17.245.90 8441 v1.30.0 docker true true} ...
	I0429 19:04:43.593444    4360 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-980800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.245.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-980800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:04:43.604945    4360 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 19:04:43.642302    4360 command_runner.go:130] > cgroupfs
	I0429 19:04:43.642302    4360 cni.go:84] Creating CNI manager for ""
	I0429 19:04:43.642302    4360 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 19:04:43.642302    4360 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:04:43.642302    4360 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.245.90 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-980800 NodeName:functional-980800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.245.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.245.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:04:43.643214    4360 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.245.90
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-980800"
	  kubeletExtraArgs:
	    node-ip: 172.17.245.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.245.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 19:04:43.656774    4360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:04:43.675799    4360 command_runner.go:130] > kubeadm
	I0429 19:04:43.675799    4360 command_runner.go:130] > kubectl
	I0429 19:04:43.675799    4360 command_runner.go:130] > kubelet
	I0429 19:04:43.675799    4360 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:04:43.690322    4360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 19:04:43.709486    4360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 19:04:43.743585    4360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:04:43.777163    4360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0429 19:04:43.825313    4360 ssh_runner.go:195] Run: grep 172.17.245.90	control-plane.minikube.internal$ /etc/hosts
	I0429 19:04:43.832712    4360 command_runner.go:130] > 172.17.245.90	control-plane.minikube.internal
	I0429 19:04:43.845915    4360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:04:44.101604    4360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:04:44.181052    4360 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800 for IP: 172.17.245.90
	I0429 19:04:44.181052    4360 certs.go:194] generating shared ca certs ...
	I0429 19:04:44.181184    4360 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:04:44.182188    4360 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 19:04:44.182188    4360 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 19:04:44.182188    4360 certs.go:256] generating profile certs ...
	I0429 19:04:44.183292    4360 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.key
	I0429 19:04:44.184254    4360 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\apiserver.key.0076c061
	I0429 19:04:44.184642    4360 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\proxy-client.key
	I0429 19:04:44.184669    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:04:44.184669    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:04:44.184669    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:04:44.184669    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:04:44.185279    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:04:44.185279    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:04:44.185279    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:04:44.185853    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:04:44.186538    4360 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 19:04:44.186538    4360 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 19:04:44.186538    4360 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 19:04:44.187188    4360 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 19:04:44.187566    4360 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 19:04:44.187566    4360 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 19:04:44.188609    4360 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 19:04:44.188833    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:04:44.188833    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 19:04:44.188833    4360 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 19:04:44.190078    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:04:44.259558    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 19:04:44.334504    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:04:44.393487    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:04:44.462478    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 19:04:44.519328    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 19:04:44.620993    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:04:44.703866    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:04:44.775846    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:04:44.835879    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 19:04:44.893000    4360 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 19:04:44.967013    4360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:04:45.039395    4360 ssh_runner.go:195] Run: openssl version
	I0429 19:04:45.057755    4360 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 19:04:45.075057    4360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:04:45.183526    4360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:04:45.192095    4360 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:04:45.192892    4360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:04:45.208482    4360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:04:45.220740    4360 command_runner.go:130] > b5213941
	I0429 19:04:45.237993    4360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:04:45.292341    4360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 19:04:45.335674    4360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 19:04:45.346423    4360 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 19:04:45.346900    4360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 19:04:45.360217    4360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 19:04:45.371399    4360 command_runner.go:130] > 51391683
	I0429 19:04:45.384731    4360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 19:04:45.421058    4360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 19:04:45.466054    4360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 19:04:45.474373    4360 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 19:04:45.475819    4360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 19:04:45.491505    4360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 19:04:45.502793    4360 command_runner.go:130] > 3ec20f2e
	I0429 19:04:45.518857    4360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
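Each `openssl x509 -hash` / `ln -fs` pair above installs a CA under `/etc/ssl/certs` as `<subject-hash>.0`, the naming scheme OpenSSL uses to locate trust anchors. A self-contained sketch of that scheme (temp paths and a throwaway cert; nothing here touches the real trust store):

```shell
set -e
tmp=$(mktemp -d)
# Throwaway self-signed CA, standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$tmp/ca.key" -out "$tmp/demoCA.pem" 2>/dev/null
# 8 hex chars, like the b5213941 printed in the log above.
hash=$(openssl x509 -hash -noout -in "$tmp/demoCA.pem")
ln -fs "$tmp/demoCA.pem" "$tmp/$hash.0"   # OpenSSL looks the CA up as <hash>.0
echo "$hash"
```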
	I0429 19:04:45.558948    4360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:04:45.575330    4360 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:04:45.575424    4360 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0429 19:04:45.575424    4360 command_runner.go:130] > Device: 8,1	Inode: 9431378     Links: 1
	I0429 19:04:45.575496    4360 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 19:04:45.575496    4360 command_runner.go:130] > Access: 2024-04-29 19:01:57.202324252 +0000
	I0429 19:04:45.575496    4360 command_runner.go:130] > Modify: 2024-04-29 19:01:57.202324252 +0000
	I0429 19:04:45.575496    4360 command_runner.go:130] > Change: 2024-04-29 19:01:57.202324252 +0000
	I0429 19:04:45.575496    4360 command_runner.go:130] >  Birth: 2024-04-29 19:01:57.202324252 +0000
	I0429 19:04:45.591889    4360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 19:04:45.604179    4360 command_runner.go:130] > Certificate will not expire
	I0429 19:04:45.619311    4360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 19:04:45.631495    4360 command_runner.go:130] > Certificate will not expire
	I0429 19:04:45.646684    4360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 19:04:45.659178    4360 command_runner.go:130] > Certificate will not expire
	I0429 19:04:45.675016    4360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 19:04:45.694140    4360 command_runner.go:130] > Certificate will not expire
	I0429 19:04:45.709758    4360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 19:04:45.724875    4360 command_runner.go:130] > Certificate will not expire
	I0429 19:04:45.739929    4360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 19:04:45.760199    4360 command_runner.go:130] > Certificate will not expire
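Each "Certificate will not expire" line above is `openssl x509 -checkend 86400` confirming the cert will still be valid 24 hours from now (exit status 0). A runnable sketch with a throwaway certificate:

```shell
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=demo" -keyout "$tmp/demo.key" -out "$tmp/demo.crt" 2>/dev/null
# Valid for 2 days, so a 1-day (86400 s) horizon passes...
openssl x509 -noout -in "$tmp/demo.crt" -checkend 86400
# ...while a 10-day horizon reports "Certificate will expire" and exits 1.
openssl x509 -noout -in "$tmp/demo.crt" -checkend 864000 || true
```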
	I0429 19:04:45.760199    4360 kubeadm.go:391] StartCluster: {Name:functional-980800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.0 ClusterName:functional-980800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.245.90 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:04:45.773891    4360 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 19:04:45.882973    4360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 19:04:45.929286    4360 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0429 19:04:45.929286    4360 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0429 19:04:45.929286    4360 command_runner.go:130] > /var/lib/minikube/etcd:
	I0429 19:04:45.929286    4360 command_runner.go:130] > member
	W0429 19:04:45.929286    4360 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 19:04:45.929286    4360 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 19:04:45.929286    4360 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 19:04:45.944834    4360 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 19:04:45.965241    4360 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:04:45.967127    4360 kubeconfig.go:125] found "functional-980800" server: "https://172.17.245.90:8441"
	I0429 19:04:45.968502    4360 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:04:45.969348    4360 kapi.go:59] client config for functional-980800: &rest.Config{Host:"https://172.17.245.90:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-980800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-980800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 19:04:45.970933    4360 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 19:04:45.983112    4360 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 19:04:46.003778    4360 kubeadm.go:624] The running cluster does not require reconfiguration: 172.17.245.90
	I0429 19:04:46.003778    4360 kubeadm.go:1154] stopping kube-system containers ...
	I0429 19:04:46.014667    4360 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 19:04:46.120235    4360 command_runner.go:130] > 7707b7f5ceef
	I0429 19:04:46.120235    4360 command_runner.go:130] > 88f0fd37692e
	I0429 19:04:46.120235    4360 command_runner.go:130] > dc1b917dab24
	I0429 19:04:46.120235    4360 command_runner.go:130] > ccb91f2068b7
	I0429 19:04:46.120235    4360 command_runner.go:130] > 756458d8706d
	I0429 19:04:46.120235    4360 command_runner.go:130] > acc7c4a166a5
	I0429 19:04:46.120235    4360 command_runner.go:130] > 87df54d365f1
	I0429 19:04:46.120235    4360 command_runner.go:130] > 5bb24a35611d
	I0429 19:04:46.120235    4360 command_runner.go:130] > 4468bf580a3b
	I0429 19:04:46.120235    4360 command_runner.go:130] > 4884ca15aeb0
	I0429 19:04:46.120235    4360 command_runner.go:130] > dd83cbf52390
	I0429 19:04:46.120235    4360 command_runner.go:130] > 9a0d6837fcbe
	I0429 19:04:46.120235    4360 command_runner.go:130] > c9e96bbd1b75
	I0429 19:04:46.120235    4360 command_runner.go:130] > 5ca9bb2fa00f
	I0429 19:04:46.120235    4360 command_runner.go:130] > c460efaeec54
	I0429 19:04:46.120235    4360 command_runner.go:130] > 9dbe7ffdee63
	I0429 19:04:46.120235    4360 command_runner.go:130] > 5d6be6406e80
	I0429 19:04:46.120235    4360 command_runner.go:130] > 865935eb57b3
	I0429 19:04:46.120235    4360 command_runner.go:130] > d9a6b54a3817
	I0429 19:04:46.120235    4360 command_runner.go:130] > fd96ca31e28c
	I0429 19:04:46.120235    4360 command_runner.go:130] > 7708db44faab
	I0429 19:04:46.120235    4360 command_runner.go:130] > 522419dc2470
	I0429 19:04:46.120235    4360 command_runner.go:130] > cd09f96bd3d4
	I0429 19:04:46.120235    4360 command_runner.go:130] > 7d87d7f61643
	I0429 19:04:46.120235    4360 command_runner.go:130] > 492e621747b4
	I0429 19:04:46.120235    4360 command_runner.go:130] > 61a56da4e0bd
	I0429 19:04:46.120235    4360 command_runner.go:130] > 7d818c02c2a7
	I0429 19:04:46.120235    4360 command_runner.go:130] > 50dcbdac6164
	I0429 19:04:46.120235    4360 command_runner.go:130] > 2c3c3cc5de2f
	I0429 19:04:46.120235    4360 docker.go:483] Stopping containers: [7707b7f5ceef 88f0fd37692e dc1b917dab24 ccb91f2068b7 756458d8706d acc7c4a166a5 87df54d365f1 5bb24a35611d 4468bf580a3b 4884ca15aeb0 dd83cbf52390 9a0d6837fcbe c9e96bbd1b75 5ca9bb2fa00f c460efaeec54 9dbe7ffdee63 5d6be6406e80 865935eb57b3 d9a6b54a3817 fd96ca31e28c 7708db44faab 522419dc2470 cd09f96bd3d4 7d87d7f61643 492e621747b4 61a56da4e0bd 7d818c02c2a7 50dcbdac6164 2c3c3cc5de2f]
	I0429 19:04:46.132386    4360 ssh_runner.go:195] Run: docker stop 7707b7f5ceef 88f0fd37692e dc1b917dab24 ccb91f2068b7 756458d8706d acc7c4a166a5 87df54d365f1 5bb24a35611d 4468bf580a3b 4884ca15aeb0 dd83cbf52390 9a0d6837fcbe c9e96bbd1b75 5ca9bb2fa00f c460efaeec54 9dbe7ffdee63 5d6be6406e80 865935eb57b3 d9a6b54a3817 fd96ca31e28c 7708db44faab 522419dc2470 cd09f96bd3d4 7d87d7f61643 492e621747b4 61a56da4e0bd 7d818c02c2a7 50dcbdac6164 2c3c3cc5de2f
	I0429 19:04:48.636509    4360 command_runner.go:130] > 7707b7f5ceef
	I0429 19:04:48.636509    4360 command_runner.go:130] > 88f0fd37692e
	I0429 19:04:48.636509    4360 command_runner.go:130] > dc1b917dab24
	I0429 19:04:48.636509    4360 command_runner.go:130] > ccb91f2068b7
	I0429 19:04:48.636509    4360 command_runner.go:130] > 756458d8706d
	I0429 19:04:48.636509    4360 command_runner.go:130] > acc7c4a166a5
	I0429 19:04:48.636509    4360 command_runner.go:130] > 87df54d365f1
	I0429 19:04:48.636509    4360 command_runner.go:130] > 5bb24a35611d
	I0429 19:04:48.636509    4360 command_runner.go:130] > 4468bf580a3b
	I0429 19:04:48.636509    4360 command_runner.go:130] > 4884ca15aeb0
	I0429 19:04:48.636509    4360 command_runner.go:130] > dd83cbf52390
	I0429 19:04:48.636509    4360 command_runner.go:130] > 9a0d6837fcbe
	I0429 19:04:48.636509    4360 command_runner.go:130] > c9e96bbd1b75
	I0429 19:04:48.636509    4360 command_runner.go:130] > 5ca9bb2fa00f
	I0429 19:04:48.636509    4360 command_runner.go:130] > c460efaeec54
	I0429 19:04:48.636509    4360 command_runner.go:130] > 9dbe7ffdee63
	I0429 19:04:48.636509    4360 command_runner.go:130] > 5d6be6406e80
	I0429 19:04:48.636509    4360 command_runner.go:130] > 865935eb57b3
	I0429 19:04:48.636509    4360 command_runner.go:130] > d9a6b54a3817
	I0429 19:04:48.636509    4360 command_runner.go:130] > fd96ca31e28c
	I0429 19:04:48.636509    4360 command_runner.go:130] > 7708db44faab
	I0429 19:04:48.636509    4360 command_runner.go:130] > 522419dc2470
	I0429 19:04:48.636509    4360 command_runner.go:130] > cd09f96bd3d4
	I0429 19:04:48.636509    4360 command_runner.go:130] > 7d87d7f61643
	I0429 19:04:48.636509    4360 command_runner.go:130] > 492e621747b4
	I0429 19:04:48.636509    4360 command_runner.go:130] > 61a56da4e0bd
	I0429 19:04:48.636509    4360 command_runner.go:130] > 7d818c02c2a7
	I0429 19:04:48.636509    4360 command_runner.go:130] > 50dcbdac6164
	I0429 19:04:48.636509    4360 command_runner.go:130] > 2c3c3cc5de2f
	I0429 19:04:48.636509    4360 ssh_runner.go:235] Completed: docker stop 7707b7f5ceef 88f0fd37692e dc1b917dab24 ccb91f2068b7 756458d8706d acc7c4a166a5 87df54d365f1 5bb24a35611d 4468bf580a3b 4884ca15aeb0 dd83cbf52390 9a0d6837fcbe c9e96bbd1b75 5ca9bb2fa00f c460efaeec54 9dbe7ffdee63 5d6be6406e80 865935eb57b3 d9a6b54a3817 fd96ca31e28c 7708db44faab 522419dc2470 cd09f96bd3d4 7d87d7f61643 492e621747b4 61a56da4e0bd 7d818c02c2a7 50dcbdac6164 2c3c3cc5de2f: (2.5041052s)
	I0429 19:04:48.651459    4360 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 19:04:48.727648    4360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:04:48.755863    4360 command_runner.go:130] > -rw------- 1 root root 5647 Apr 29 19:02 /etc/kubernetes/admin.conf
	I0429 19:04:48.756048    4360 command_runner.go:130] > -rw------- 1 root root 5657 Apr 29 19:02 /etc/kubernetes/controller-manager.conf
	I0429 19:04:48.756091    4360 command_runner.go:130] > -rw------- 1 root root 2007 Apr 29 19:02 /etc/kubernetes/kubelet.conf
	I0429 19:04:48.756091    4360 command_runner.go:130] > -rw------- 1 root root 5601 Apr 29 19:02 /etc/kubernetes/scheduler.conf
	I0429 19:04:48.757497    4360 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Apr 29 19:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Apr 29 19:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Apr 29 19:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Apr 29 19:02 /etc/kubernetes/scheduler.conf
	
	I0429 19:04:48.771541    4360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0429 19:04:48.793908    4360 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0429 19:04:48.811347    4360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0429 19:04:48.830429    4360 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0429 19:04:48.841498    4360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0429 19:04:48.867330    4360 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:04:48.883060    4360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:04:48.920051    4360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0429 19:04:48.938033    4360 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:04:48.952718    4360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 19:04:48.997349    4360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 19:04:49.016423    4360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 19:04:49.226465    4360 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 19:04:49.227366    4360 command_runner.go:130] > [certs] Using the existing "sa" key
	I0429 19:04:49.227366    4360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:04:49.396416    4360 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 19:04:49.877889    4360 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0429 19:04:50.114205    4360 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0429 19:04:50.394385    4360 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0429 19:04:50.555930    4360 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 19:04:51.077655    4360 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 19:04:51.081224    4360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.8538446s)
	I0429 19:04:51.081312    4360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:04:51.480040    4360 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 19:04:51.480040    4360 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 19:04:51.480040    4360 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 19:04:51.480242    4360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:04:51.579297    4360 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 19:04:51.579297    4360 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 19:04:51.579297    4360 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 19:04:51.580159    4360 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 19:04:51.580159    4360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:04:51.709543    4360 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 19:04:51.711721    4360 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:04:51.727644    4360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:04:52.233668    4360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:04:52.740279    4360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:04:53.237763    4360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:04:53.727485    4360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:04:53.756479    4360 command_runner.go:130] > 6054
	I0429 19:04:53.757273    4360 api_server.go:72] duration metric: took 2.0457255s to wait for apiserver process to appear ...
	I0429 19:04:53.757347    4360 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:04:53.757424    4360 api_server.go:253] Checking apiserver healthz at https://172.17.245.90:8441/healthz ...
	I0429 19:04:56.379750    4360 api_server.go:279] https://172.17.245.90:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:04:56.380679    4360 api_server.go:103] status: https://172.17.245.90:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:04:56.380762    4360 api_server.go:253] Checking apiserver healthz at https://172.17.245.90:8441/healthz ...
	I0429 19:04:56.429321    4360 api_server.go:279] https://172.17.245.90:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:04:56.429321    4360 api_server.go:103] status: https://172.17.245.90:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:04:56.764440    4360 api_server.go:253] Checking apiserver healthz at https://172.17.245.90:8441/healthz ...
	I0429 19:04:56.773760    4360 api_server.go:279] https://172.17.245.90:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:04:56.773760    4360 api_server.go:103] status: https://172.17.245.90:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:04:57.258470    4360 api_server.go:253] Checking apiserver healthz at https://172.17.245.90:8441/healthz ...
	I0429 19:04:57.270790    4360 api_server.go:279] https://172.17.245.90:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:04:57.270883    4360 api_server.go:103] status: https://172.17.245.90:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:04:57.767510    4360 api_server.go:253] Checking apiserver healthz at https://172.17.245.90:8441/healthz ...
	I0429 19:04:57.776630    4360 api_server.go:279] https://172.17.245.90:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:04:57.777069    4360 api_server.go:103] status: https://172.17.245.90:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:04:58.271606    4360 api_server.go:253] Checking apiserver healthz at https://172.17.245.90:8441/healthz ...
	I0429 19:04:58.279732    4360 api_server.go:279] https://172.17.245.90:8441/healthz returned 200:
	ok
	I0429 19:04:58.280327    4360 round_trippers.go:463] GET https://172.17.245.90:8441/version
	I0429 19:04:58.280501    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:58.280501    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:58.280501    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:58.290788    4360 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 19:04:58.290788    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:58.290788    4360 round_trippers.go:580]     Audit-Id: a6b3cec9-6656-48cf-bdf4-d04e12e37873
	I0429 19:04:58.290788    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:58.290788    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:58.290788    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:58.291530    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:58.291530    4360 round_trippers.go:580]     Content-Length: 263
	I0429 19:04:58.291530    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:58 GMT
	I0429 19:04:58.291530    4360 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 19:04:58.291827    4360 api_server.go:141] control plane version: v1.30.0
	I0429 19:04:58.291827    4360 api_server.go:131] duration metric: took 4.5344475s to wait for apiserver health ...
	I0429 19:04:58.292034    4360 cni.go:84] Creating CNI manager for ""
	I0429 19:04:58.292077    4360 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 19:04:58.294420    4360 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 19:04:58.309956    4360 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 19:04:58.333496    4360 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 19:04:58.373884    4360 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:04:58.374870    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods
	I0429 19:04:58.374870    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:58.374870    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:58.374870    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:58.389995    4360 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0429 19:04:58.389995    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:58.389995    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:58.389995    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:58.389995    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:58 GMT
	I0429 19:04:58.389995    4360 round_trippers.go:580]     Audit-Id: 2cb667a9-4122-4d2f-bf9f-0fb31f362fd9
	I0429 19:04:58.389995    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:58.389995    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:58.392368    4360 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-cqkc4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3","resourceVersion":"538","creationTimestamp":"2024-04-29T19:02:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d14cd212-4afb-4fd7-861a-cf7df764c17f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d14cd212-4afb-4fd7-861a-cf7df764c17f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51690 chars]
	I0429 19:04:58.397504    4360 system_pods.go:59] 7 kube-system pods found
	I0429 19:04:58.397608    4360 system_pods.go:61] "coredns-7db6d8ff4d-cqkc4" [41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3] Running
	I0429 19:04:58.397608    4360 system_pods.go:61] "etcd-functional-980800" [fc2416af-4d87-4476-8c96-d70e6320dac4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 19:04:58.397608    4360 system_pods.go:61] "kube-apiserver-functional-980800" [e6c4fa80-7b63-4e06-8813-594bd298a8dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 19:04:58.397608    4360 system_pods.go:61] "kube-controller-manager-functional-980800" [4b4efc39-d13c-4e21-8428-5e72f3ba655f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 19:04:58.397608    4360 system_pods.go:61] "kube-proxy-794mc" [da9d80f8-9325-46df-813b-1e3801cf3e88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 19:04:58.397608    4360 system_pods.go:61] "kube-scheduler-functional-980800" [ee11cc90-27fe-40dc-be40-86478d68cfc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 19:04:58.397719    4360 system_pods.go:61] "storage-provisioner" [cb1b2baa-391c-407a-a97d-23d3d0d29f13] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 19:04:58.397719    4360 system_pods.go:74] duration metric: took 23.8345ms to wait for pod list to return data ...
	I0429 19:04:58.397719    4360 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:04:58.397719    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes
	I0429 19:04:58.397719    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:58.397719    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:58.397719    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:58.405346    4360 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:04:58.405346    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:58.405346    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:58 GMT
	I0429 19:04:58.405346    4360 round_trippers.go:580]     Audit-Id: 88d6d854-e34f-407e-a298-91c53e6b574f
	I0429 19:04:58.405346    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:58.405346    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:58.405346    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:58.405346    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:58.410146    4360 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0429 19:04:58.411136    4360 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:04:58.411196    4360 node_conditions.go:123] node cpu capacity is 2
	I0429 19:04:58.411196    4360 node_conditions.go:105] duration metric: took 13.477ms to run NodePressure ...
	I0429 19:04:58.411274    4360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:04:58.796175    4360 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 19:04:58.796175    4360 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 19:04:58.796345    4360 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 19:04:58.796473    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0429 19:04:58.796564    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:58.796585    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:58.796585    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:58.800268    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:04:58.800268    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:58.800268    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:58.800268    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:58.800268    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:58.800268    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:58 GMT
	I0429 19:04:58.801214    4360 round_trippers.go:580]     Audit-Id: a868969c-16b5-45ac-8e74-92bdc6a06c7d
	I0429 19:04:58.801214    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:58.802193    4360 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"546"},"items":[{"metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30951 chars]
	I0429 19:04:58.803791    4360 kubeadm.go:733] kubelet initialised
	I0429 19:04:58.803791    4360 kubeadm.go:734] duration metric: took 7.4463ms waiting for restarted kubelet to initialise ...
	I0429 19:04:58.803791    4360 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:04:58.803791    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods
	I0429 19:04:58.803791    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:58.803791    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:58.803791    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:58.809384    4360 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:04:58.809384    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:58.809384    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:58.809384    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:58.809384    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:58.809384    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:58 GMT
	I0429 19:04:58.809384    4360 round_trippers.go:580]     Audit-Id: b9f9b032-23e5-442e-b2b9-1aab331ed497
	I0429 19:04:58.809384    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:58.810415    4360 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"546"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-cqkc4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3","resourceVersion":"538","creationTimestamp":"2024-04-29T19:02:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d14cd212-4afb-4fd7-861a-cf7df764c17f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d14cd212-4afb-4fd7-861a-cf7df764c17f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51292 chars]
	I0429 19:04:58.812393    4360 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-cqkc4" in "kube-system" namespace to be "Ready" ...
	I0429 19:04:58.812393    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cqkc4
	I0429 19:04:58.812393    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:58.812393    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:58.812393    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:58.815399    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:04:58.815399    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:58.815399    4360 round_trippers.go:580]     Audit-Id: f55883bd-f3ed-4f69-9f75-29b1d9ef3b0b
	I0429 19:04:58.816284    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:58.816326    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:58.816326    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:58.816532    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:58.816532    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:58 GMT
	I0429 19:04:58.816630    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-cqkc4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3","resourceVersion":"538","creationTimestamp":"2024-04-29T19:02:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d14cd212-4afb-4fd7-861a-cf7df764c17f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d14cd212-4afb-4fd7-861a-cf7df764c17f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0429 19:04:58.817180    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:04:58.817308    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:58.817351    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:58.817351    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:58.819421    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:04:58.820250    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:58.820250    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:58.820317    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:58.820317    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:58 GMT
	I0429 19:04:58.820317    4360 round_trippers.go:580]     Audit-Id: 1acdd5af-d9ea-4d21-8e9e-16a91d12378c
	I0429 19:04:58.820317    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:58.820317    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:58.820688    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:04:58.820842    4360 pod_ready.go:92] pod "coredns-7db6d8ff4d-cqkc4" in "kube-system" namespace has status "Ready":"True"
	I0429 19:04:58.820842    4360 pod_ready.go:81] duration metric: took 8.4491ms for pod "coredns-7db6d8ff4d-cqkc4" in "kube-system" namespace to be "Ready" ...
	I0429 19:04:58.820842    4360 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:04:58.820842    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:04:58.820842    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:58.820842    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:58.820842    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:58.823957    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:04:58.824448    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:58.824448    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:58.824448    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:58.824509    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:58.824509    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:58 GMT
	I0429 19:04:58.824509    4360 round_trippers.go:580]     Audit-Id: 4fdef729-6027-468d-ba05-a0b53ed39447
	I0429 19:04:58.824509    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:58.824687    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:04:58.825255    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:04:58.825255    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:58.825255    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:58.825255    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:58.828840    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:04:58.828840    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:58.828840    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:58.828840    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:58 GMT
	I0429 19:04:58.828840    4360 round_trippers.go:580]     Audit-Id: aa0eda4f-2475-41ad-aa3a-ba0817bbf4e5
	I0429 19:04:58.828840    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:58.828840    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:58.828840    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:58.829844    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:04:59.326402    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:04:59.326470    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:59.326470    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:59.326470    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:59.330715    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:04:59.331275    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:59.331275    4360 round_trippers.go:580]     Audit-Id: 98c0cddf-b355-4646-8a11-727758c08191
	I0429 19:04:59.331275    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:59.331367    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:59.331367    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:59.331367    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:59.331367    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:59 GMT
	I0429 19:04:59.331817    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:04:59.332803    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:04:59.332803    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:59.332864    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:59.332864    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:59.335486    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:04:59.335486    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:59.335486    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:59.335486    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:59 GMT
	I0429 19:04:59.335486    4360 round_trippers.go:580]     Audit-Id: 1d4fafbf-f555-4583-83ae-2a977394c010
	I0429 19:04:59.335486    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:59.335486    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:59.335486    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:59.336545    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:04:59.826459    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:04:59.826459    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:59.826459    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:59.826459    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:59.831042    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:04:59.831283    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:59.831283    4360 round_trippers.go:580]     Audit-Id: ade6908a-4481-41a2-a560-b6121a172029
	I0429 19:04:59.831283    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:59.831283    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:59.831283    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:59.831283    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:59.831283    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:59 GMT
	I0429 19:04:59.831507    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:04:59.832216    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:04:59.832216    4360 round_trippers.go:469] Request Headers:
	I0429 19:04:59.832216    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:04:59.832216    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:04:59.835601    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:04:59.835778    4360 round_trippers.go:577] Response Headers:
	I0429 19:04:59.835778    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:04:59.835778    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:04:59.835778    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:04:59.835778    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:04:59.835778    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:04:59 GMT
	I0429 19:04:59.835778    4360 round_trippers.go:580]     Audit-Id: 7e36abd6-b4c0-4d16-9989-190ed45a066e
	I0429 19:04:59.835959    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:00.326720    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:00.326720    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:00.326720    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:00.326720    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:00.330448    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:00.330448    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:00.330448    4360 round_trippers.go:580]     Audit-Id: 526161cd-37b3-482d-a72c-a3a9d6609b5a
	I0429 19:05:00.330604    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:00.330604    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:00.330604    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:00.330604    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:00.330604    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:00 GMT
	I0429 19:05:00.330896    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:05:00.331503    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:00.331503    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:00.331503    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:00.331503    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:00.335069    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:00.335069    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:00.335069    4360 round_trippers.go:580]     Audit-Id: 84f5b3ea-91c9-4621-8dd7-ca25dec6bb1e
	I0429 19:05:00.335069    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:00.335069    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:00.335069    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:00.335069    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:00.335069    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:00 GMT
	I0429 19:05:00.335069    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:00.823723    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:00.823723    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:00.823868    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:00.823868    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:00.827230    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:00.827747    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:00.827747    4360 round_trippers.go:580]     Audit-Id: 85b07255-f65f-41a4-acc1-1eb8d6dcd36d
	I0429 19:05:00.827747    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:00.827747    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:00.827791    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:00.827791    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:00.827791    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:00 GMT
	I0429 19:05:00.827909    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:05:00.829061    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:00.829126    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:00.829126    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:00.829126    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:00.831335    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:05:00.831335    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:00.831335    4360 round_trippers.go:580]     Audit-Id: 33150ef2-d6c1-4a4b-90a9-90517fda36e5
	I0429 19:05:00.831335    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:00.831335    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:00.831335    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:00.831335    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:00.831335    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:00 GMT
	I0429 19:05:00.831335    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:00.832857    4360 pod_ready.go:102] pod "etcd-functional-980800" in "kube-system" namespace has status "Ready":"False"
	I0429 19:05:01.324400    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:01.324664    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:01.324664    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:01.324664    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:01.328921    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:01.329483    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:01.329483    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:01.329483    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:01 GMT
	I0429 19:05:01.329483    4360 round_trippers.go:580]     Audit-Id: b73a1ee8-8b1b-44b8-8b7e-3dd02330140e
	I0429 19:05:01.329483    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:01.329483    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:01.329483    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:01.329694    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:05:01.330071    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:01.330071    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:01.330071    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:01.330071    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:01.331932    4360 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 19:05:01.331932    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:01.331932    4360 round_trippers.go:580]     Audit-Id: 1083f5c8-642a-43ce-aba9-0bc60d27bbe9
	I0429 19:05:01.331932    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:01.331932    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:01.331932    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:01.331932    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:01.331932    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:01 GMT
	I0429 19:05:01.337948    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:01.832230    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:01.832230    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:01.832230    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:01.832230    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:01.847438    4360 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0429 19:05:01.847438    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:01.847438    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:01.847438    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:01.847438    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:01 GMT
	I0429 19:05:01.847438    4360 round_trippers.go:580]     Audit-Id: 97a2b047-8d05-446e-a7e1-7303c27eca95
	I0429 19:05:01.847438    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:01.847438    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:01.847755    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:05:01.848600    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:01.848600    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:01.848600    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:01.848600    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:01.852968    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:01.852968    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:01.852968    4360 round_trippers.go:580]     Audit-Id: 528d6a35-5569-4c6c-b173-6b343a9845ed
	I0429 19:05:01.852968    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:01.853036    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:01.853036    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:01.853036    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:01.853036    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:01 GMT
	I0429 19:05:01.853402    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:02.336112    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:02.336316    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:02.336316    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:02.336316    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:02.340671    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:02.340671    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:02.340774    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:02 GMT
	I0429 19:05:02.340774    4360 round_trippers.go:580]     Audit-Id: 9ad523a1-9075-4778-a03d-a05c30cd2492
	I0429 19:05:02.340774    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:02.340774    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:02.340774    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:02.340774    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:02.340994    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:05:02.341218    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:02.341218    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:02.341218    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:02.341218    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:02.343896    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:05:02.343896    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:02.343896    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:02.343896    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:02.343896    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:02 GMT
	I0429 19:05:02.343896    4360 round_trippers.go:580]     Audit-Id: 68271d40-5b02-4058-9aea-91f11bcb4976
	I0429 19:05:02.343896    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:02.343896    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:02.344949    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:02.834816    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:02.834816    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:02.834881    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:02.834881    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:02.839819    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:02.840048    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:02.840048    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:02.840048    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:02.840048    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:02.840048    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:02.840048    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:02 GMT
	I0429 19:05:02.840048    4360 round_trippers.go:580]     Audit-Id: 9bc5b9ae-2afc-41be-bb55-c9dbb9430d7f
	I0429 19:05:02.840292    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:05:02.840503    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:02.840503    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:02.840503    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:02.840503    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:02.844250    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:02.845023    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:02.845023    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:02.845023    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:02.845023    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:02.845023    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:02 GMT
	I0429 19:05:02.845023    4360 round_trippers.go:580]     Audit-Id: 6ce9dc93-a598-4297-9a88-2774bf349a2b
	I0429 19:05:02.845097    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:02.845097    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:02.845765    4360 pod_ready.go:102] pod "etcd-functional-980800" in "kube-system" namespace has status "Ready":"False"
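The `pod_ready.go:102` lines above come from a loop that re-fetches the pod roughly every 500ms and inspects its `Ready` condition. As a rough illustration (not minikube's actual code), the condition check each poll cycle performs boils down to scanning `status.conditions` for `Type: Ready`; the `PodCondition` struct below is a simplified stand-in for `k8s.io/api/core/v1.PodCondition`:

```go
package main

import "fmt"

// PodCondition is a simplified stand-in for the Kubernetes
// core/v1.PodCondition type returned in the pod status above.
type PodCondition struct {
	Type   string // e.g. "Initialized", "Ready", "ContainersReady"
	Status string // "True", "False", or "Unknown"
}

// isPodReady reports whether the pod's "Ready" condition is "True".
// A poller like the one in this log would call this on each fetched
// pod status and keep retrying while it returns false.
func isPodReady(conds []PodCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	// No Ready condition reported yet: treat as not ready.
	return false
}

func main() {
	// Mirrors the state logged above: etcd-functional-980800 not yet Ready.
	conds := []PodCondition{
		{Type: "Initialized", Status: "True"},
		{Type: "Ready", Status: "False"},
	}
	fmt.Println(isPodReady(conds)) // prints "false"
}
```

In the real client this check runs against `pod.Status.Conditions` from a `CoreV1().Pods(ns).Get(...)` call, which is exactly the repeated `GET .../pods/etcd-functional-980800` traffic visible in the log.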
	I0429 19:05:03.322206    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:03.322206    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:03.322276    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:03.322276    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:03.326444    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:03.326444    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:03.326444    4360 round_trippers.go:580]     Audit-Id: 4a540bde-ef9b-4ea3-89c1-4b8df09fcc81
	I0429 19:05:03.326807    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:03.326807    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:03.326807    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:03.326807    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:03.326807    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:03 GMT
	I0429 19:05:03.326997    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:05:03.328149    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:03.328149    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:03.328349    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:03.328349    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:03.331521    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:03.331521    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:03.331521    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:03.331521    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:03.331521    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:03.331521    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:03.331521    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:03 GMT
	I0429 19:05:03.331521    4360 round_trippers.go:580]     Audit-Id: e1c8bd52-25c5-4c9f-9803-8de3dc764627
	I0429 19:05:03.333373    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:03.821415    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:03.821415    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:03.821415    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:03.821415    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:03.826018    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:03.826018    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:03.826018    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:03.826704    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:03 GMT
	I0429 19:05:03.826704    4360 round_trippers.go:580]     Audit-Id: 6c78f32c-c7fb-4d46-b45d-082265a16dca
	I0429 19:05:03.826704    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:03.826704    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:03.826704    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:03.827135    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"539","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0429 19:05:03.827767    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:03.827767    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:03.827767    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:03.827767    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:03.831245    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:03.831685    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:03.831685    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:03.831685    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:03.831685    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:03.831685    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:03.831685    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:03 GMT
	I0429 19:05:03.831685    4360 round_trippers.go:580]     Audit-Id: 34622250-da0a-4995-ac29-a2941ea3e010
	I0429 19:05:03.831685    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:04.323810    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:04.324145    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:04.324145    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:04.324145    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:04.330330    4360 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:05:04.330454    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:04.330454    4360 round_trippers.go:580]     Audit-Id: f79a8ab9-77ee-4df9-a8a6-4256cc9fc57b
	I0429 19:05:04.330454    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:04.330454    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:04.330454    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:04.330454    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:04.330454    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:04 GMT
	I0429 19:05:04.330593    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"593","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6369 chars]
	I0429 19:05:04.331838    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:04.331838    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:04.331933    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:04.331933    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:04.335239    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:04.335239    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:04.335239    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:04.335239    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:04.335239    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:04 GMT
	I0429 19:05:04.335239    4360 round_trippers.go:580]     Audit-Id: 8dc11e51-94c2-49c2-a8c2-2f45ffa78d42
	I0429 19:05:04.335239    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:04.335239    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:04.335239    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:04.335239    4360 pod_ready.go:92] pod "etcd-functional-980800" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:04.335239    4360 pod_ready.go:81] duration metric: took 5.5143578s for pod "etcd-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:04.336227    4360 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:04.336391    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:04.336469    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:04.336469    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:04.336469    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:04.340289    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:04.340344    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:04.340344    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:04.340344    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:04.340344    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:04.340344    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:04 GMT
	I0429 19:05:04.340410    4360 round_trippers.go:580]     Audit-Id: 8e841d87-d7e0-4528-a735-673ac2d4d15a
	I0429 19:05:04.340410    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:04.340601    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"535","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8146 chars]
	I0429 19:05:04.341172    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:04.341172    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:04.341172    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:04.341172    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:04.343798    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:05:04.343798    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:04.343798    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:04.343798    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:04.343798    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:04.343798    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:04 GMT
	I0429 19:05:04.343798    4360 round_trippers.go:580]     Audit-Id: 70e52970-a9cd-48d4-a78f-402e54ca69be
	I0429 19:05:04.343798    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:04.343798    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:04.851144    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:04.851144    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:04.851144    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:04.851144    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:04.855724    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:04.855724    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:04.856270    4360 round_trippers.go:580]     Audit-Id: e1444ec5-dba2-46e2-863c-85832c3c849e
	I0429 19:05:04.856270    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:04.856270    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:04.856270    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:04.856270    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:04.856270    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:04 GMT
	I0429 19:05:04.856616    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"535","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8146 chars]
	I0429 19:05:04.857422    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:04.857482    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:04.857482    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:04.857482    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:04.859726    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:05:04.859726    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:04.859726    4360 round_trippers.go:580]     Audit-Id: e373a6cf-a727-44a4-a972-7f1c669333ba
	I0429 19:05:04.859726    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:04.859726    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:04.859726    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:04.859726    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:04.860723    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:04 GMT
	I0429 19:05:04.860899    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:05.350618    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:05.350812    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:05.350812    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:05.350812    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:05.354516    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:05.355051    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:05.355051    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:05 GMT
	I0429 19:05:05.355051    4360 round_trippers.go:580]     Audit-Id: bb4f4040-2f8a-4318-bc8a-08e801ce9d58
	I0429 19:05:05.355051    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:05.355051    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:05.355051    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:05.355051    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:05.355051    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"535","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8146 chars]
	I0429 19:05:05.355958    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:05.356073    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:05.356073    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:05.356073    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:05.359728    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:05.359728    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:05.359996    4360 round_trippers.go:580]     Audit-Id: 8515d93b-40a3-43ca-ae41-a8d76115dfdd
	I0429 19:05:05.359996    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:05.359996    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:05.359996    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:05.359996    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:05.359996    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:05 GMT
	I0429 19:05:05.360932    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:05.848645    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:05.848645    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:05.848645    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:05.848645    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:05.852788    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:05.853474    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:05.853474    4360 round_trippers.go:580]     Audit-Id: 3ebb7fe9-08f7-4560-b6f2-b5dc5fff161f
	I0429 19:05:05.853474    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:05.853474    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:05.853474    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:05.853575    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:05.853575    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:05 GMT
	I0429 19:05:05.853867    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"535","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8146 chars]
	I0429 19:05:05.854675    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:05.854775    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:05.854775    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:05.854775    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:05.858147    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:05.858573    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:05.858573    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:05.858656    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:05 GMT
	I0429 19:05:05.858719    4360 round_trippers.go:580]     Audit-Id: d6495866-d29f-4ce3-8307-5d84c287c757
	I0429 19:05:05.858719    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:05.858719    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:05.858719    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:05.858719    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:06.350594    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:06.350594    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:06.350594    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:06.350594    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:06.354223    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:06.355250    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:06.355250    4360 round_trippers.go:580]     Audit-Id: 7979d5da-d483-4dff-9e69-cd8cf8363ef8
	I0429 19:05:06.355283    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:06.355283    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:06.355283    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:06.355283    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:06.355283    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:06 GMT
	I0429 19:05:06.355606    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"535","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8146 chars]
	I0429 19:05:06.355891    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:06.355891    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:06.355891    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:06.355891    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:06.358976    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:05:06.358976    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:06.358976    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:06.358976    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:06 GMT
	I0429 19:05:06.358976    4360 round_trippers.go:580]     Audit-Id: f2f34c4c-b13e-45d1-a6c4-afa3e0539d2d
	I0429 19:05:06.358976    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:06.358976    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:06.358976    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:06.358976    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:06.359523    4360 pod_ready.go:102] pod "kube-apiserver-functional-980800" in "kube-system" namespace has status "Ready":"False"
	I0429 19:05:06.841015    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:06.841015    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:06.841015    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:06.841015    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:06.845615    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:06.845809    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:06.845809    4360 round_trippers.go:580]     Audit-Id: d0087ce3-02df-45c4-816e-64ec6a387490
	I0429 19:05:06.845883    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:06.845883    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:06.845883    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:06.845883    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:06.845883    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:06 GMT
	I0429 19:05:06.846800    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"535","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8146 chars]
	I0429 19:05:06.847108    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:06.847108    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:06.847108    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:06.847108    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:06.850761    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:06.850859    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:06.850859    4360 round_trippers.go:580]     Audit-Id: 62769fc5-fdd3-4642-8c36-d5c87d002e9c
	I0429 19:05:06.850859    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:06.850859    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:06.850859    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:06.850859    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:06.850859    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:06 GMT
	I0429 19:05:06.851265    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:07.342141    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:07.342141    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:07.342141    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:07.342338    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:07.346996    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:07.346996    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:07.346996    4360 round_trippers.go:580]     Audit-Id: 72b79e32-02d4-4768-afb9-1e942785e827
	I0429 19:05:07.347098    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:07.347098    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:07.347098    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:07.347098    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:07.347098    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:07 GMT
	I0429 19:05:07.347655    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"535","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8146 chars]
	I0429 19:05:07.348860    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:07.348860    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:07.348860    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:07.348860    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:07.351446    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:05:07.352063    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:07.352063    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:07.352063    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:07.352063    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:07.352063    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:07 GMT
	I0429 19:05:07.352063    4360 round_trippers.go:580]     Audit-Id: c7d5b2ef-86d7-4e3b-a6d5-c0b8e2fa4fc0
	I0429 19:05:07.352063    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:07.352290    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:07.843325    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:07.843436    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:07.843436    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:07.843436    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:07.851830    4360 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:05:07.852095    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:07.852095    4360 round_trippers.go:580]     Audit-Id: ec092ce2-2931-4436-a5cf-dbfc863d6338
	I0429 19:05:07.852095    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:07.852095    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:07.852095    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:07.852095    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:07.852095    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:07 GMT
	I0429 19:05:07.852854    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"535","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8146 chars]
	I0429 19:05:07.852993    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:07.853581    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:07.853581    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:07.853581    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:07.855847    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:05:07.856513    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:07.856568    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:07.856568    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:07.856568    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:07.856568    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:07.856568    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:07 GMT
	I0429 19:05:07.856568    4360 round_trippers.go:580]     Audit-Id: c1828c58-c77b-4223-a078-1210ccf994a8
	I0429 19:05:07.856724    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:08.343180    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:08.343492    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:08.343492    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:08.343589    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:08.347337    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:08.348030    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:08.348030    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:08.348030    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:08.348030    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:08.348030    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:08 GMT
	I0429 19:05:08.348030    4360 round_trippers.go:580]     Audit-Id: 72e6635b-c789-43f4-b260-6816d203a582
	I0429 19:05:08.348030    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:08.348162    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"600","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7902 chars]
	I0429 19:05:08.349141    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:08.349196    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:08.349196    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:08.349196    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:08.352784    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:08.352784    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:08.352784    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:08.352784    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:08 GMT
	I0429 19:05:08.352784    4360 round_trippers.go:580]     Audit-Id: 181e2ad3-b78a-4303-84c7-ec51958802cb
	I0429 19:05:08.352784    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:08.352784    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:08.352784    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:08.353397    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:08.353998    4360 pod_ready.go:92] pod "kube-apiserver-functional-980800" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:08.353998    4360 pod_ready.go:81] duration metric: took 4.0176738s for pod "kube-apiserver-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:08.353998    4360 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:08.354132    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-980800
	I0429 19:05:08.354231    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:08.354231    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:08.354231    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:08.358013    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:08.358013    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:08.358013    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:08 GMT
	I0429 19:05:08.358013    4360 round_trippers.go:580]     Audit-Id: f407e0fa-3fd5-4559-ae14-b0517f0939dd
	I0429 19:05:08.358013    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:08.358013    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:08.358013    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:08.358013    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:08.359017    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-980800","namespace":"kube-system","uid":"4b4efc39-d13c-4e21-8428-5e72f3ba655f","resourceVersion":"534","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f07dc7fac9d447c1fdf7ca0b5a6e82b9","kubernetes.io/config.mirror":"f07dc7fac9d447c1fdf7ca0b5a6e82b9","kubernetes.io/config.seen":"2024-04-29T19:02:02.743790932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0429 19:05:08.359017    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:08.359017    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:08.359017    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:08.359017    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:08.362259    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:08.362391    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:08.362391    4360 round_trippers.go:580]     Audit-Id: cb1f6a1b-2fce-429d-9fe1-b1fc34da4f4e
	I0429 19:05:08.362391    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:08.362391    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:08.362391    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:08.362391    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:08.362391    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:08 GMT
	I0429 19:05:08.362620    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:08.862888    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-980800
	I0429 19:05:08.862888    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:08.862888    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:08.862888    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:08.866928    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:08.867027    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:08.867027    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:08.867027    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:08.867111    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:08 GMT
	I0429 19:05:08.867111    4360 round_trippers.go:580]     Audit-Id: a99917c0-cd4c-4e44-aba4-2c21b3314d2b
	I0429 19:05:08.867111    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:08.867111    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:08.867510    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-980800","namespace":"kube-system","uid":"4b4efc39-d13c-4e21-8428-5e72f3ba655f","resourceVersion":"534","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f07dc7fac9d447c1fdf7ca0b5a6e82b9","kubernetes.io/config.mirror":"f07dc7fac9d447c1fdf7ca0b5a6e82b9","kubernetes.io/config.seen":"2024-04-29T19:02:02.743790932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0429 19:05:08.867810    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:08.867810    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:08.867810    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:08.867810    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:08.871189    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:08.875238    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:08.875282    4360 round_trippers.go:580]     Audit-Id: a39574cc-8dcd-484b-afbd-77e0676f7708
	I0429 19:05:08.875282    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:08.875282    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:08.875282    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:08.875282    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:08.875282    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:08 GMT
	I0429 19:05:08.875442    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:09.368121    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-980800
	I0429 19:05:09.368444    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.368444    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.368444    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.372203    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:09.372649    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.372649    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.372649    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.372649    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.372649    4360 round_trippers.go:580]     Audit-Id: b7651994-179c-4335-aede-2e850b6cb656
	I0429 19:05:09.372649    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.372649    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.372905    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-980800","namespace":"kube-system","uid":"4b4efc39-d13c-4e21-8428-5e72f3ba655f","resourceVersion":"602","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f07dc7fac9d447c1fdf7ca0b5a6e82b9","kubernetes.io/config.mirror":"f07dc7fac9d447c1fdf7ca0b5a6e82b9","kubernetes.io/config.seen":"2024-04-29T19:02:02.743790932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0429 19:05:09.373694    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:09.374343    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.374343    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.374343    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.377131    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:05:09.377131    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.377131    4360 round_trippers.go:580]     Audit-Id: fd5448e6-f336-46b1-95f1-499b2c5d170c
	I0429 19:05:09.377131    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.377131    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.377131    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.377131    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.377131    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.377131    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:09.378013    4360 pod_ready.go:92] pod "kube-controller-manager-functional-980800" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:09.378013    4360 pod_ready.go:81] duration metric: took 1.0240083s for pod "kube-controller-manager-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:09.378013    4360 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-794mc" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:09.378013    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-proxy-794mc
	I0429 19:05:09.378013    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.378013    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.378013    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.392693    4360 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 19:05:09.392693    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.392693    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.392693    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.392693    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.392693    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.392693    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.392693    4360 round_trippers.go:580]     Audit-Id: b99abe94-c4f7-4552-b4a1-c16cf613e5f9
	I0429 19:05:09.392693    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-794mc","generateName":"kube-proxy-","namespace":"kube-system","uid":"da9d80f8-9325-46df-813b-1e3801cf3e88","resourceVersion":"544","creationTimestamp":"2024-04-29T19:02:26Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4da3dab9-5932-420b-a9a9-d1226b53aeb2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4da3dab9-5932-420b-a9a9-d1226b53aeb2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0429 19:05:09.393915    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:09.393915    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.393970    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.393970    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.399218    4360 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:05:09.399218    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.399305    4360 round_trippers.go:580]     Audit-Id: b9cf7fe6-641a-4c85-86a9-79e30204fe4d
	I0429 19:05:09.399305    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.399305    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.399305    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.399305    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.399305    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.399685    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:09.400145    4360 pod_ready.go:92] pod "kube-proxy-794mc" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:09.400145    4360 pod_ready.go:81] duration metric: took 22.1317ms for pod "kube-proxy-794mc" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:09.400145    4360 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:09.400248    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-980800
	I0429 19:05:09.400248    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.400248    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.400248    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.403768    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:09.403768    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.403768    4360 round_trippers.go:580]     Audit-Id: dc0cf244-f4b1-49e9-af50-1b0415b49386
	I0429 19:05:09.403768    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.403768    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.403768    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.403768    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.403768    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.403768    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-980800","namespace":"kube-system","uid":"ee11cc90-27fe-40dc-be40-86478d68cfc6","resourceVersion":"595","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f53fce625206738b380a9ee824b9188c","kubernetes.io/config.mirror":"f53fce625206738b380a9ee824b9188c","kubernetes.io/config.seen":"2024-04-29T19:02:11.766066506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0429 19:05:09.404850    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:09.404886    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.404886    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.404974    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.410224    4360 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:05:09.410224    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.410224    4360 round_trippers.go:580]     Audit-Id: b094b962-c8ad-4240-ab21-e319728a96d9
	I0429 19:05:09.410414    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.410471    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.410471    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.410471    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.410471    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.410733    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:09.411702    4360 pod_ready.go:92] pod "kube-scheduler-functional-980800" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:09.411762    4360 pod_ready.go:81] duration metric: took 11.617ms for pod "kube-scheduler-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:09.411762    4360 pod_ready.go:38] duration metric: took 10.607897s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:05:09.411762    4360 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 19:05:09.433981    4360 command_runner.go:130] > -16
	I0429 19:05:09.434214    4360 ops.go:34] apiserver oom_adj: -16
	I0429 19:05:09.434214    4360 kubeadm.go:591] duration metric: took 23.5047627s to restartPrimaryControlPlane
	I0429 19:05:09.434365    4360 kubeadm.go:393] duration metric: took 23.6739997s to StartCluster
	I0429 19:05:09.434695    4360 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:05:09.434695    4360 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:05:09.436618    4360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:05:09.438343    4360 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.245.90 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:05:09.438343    4360 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 19:05:09.442009    4360 out.go:177] * Verifying Kubernetes components...
	I0429 19:05:09.438343    4360 addons.go:69] Setting storage-provisioner=true in profile "functional-980800"
	I0429 19:05:09.438343    4360 addons.go:69] Setting default-storageclass=true in profile "functional-980800"
	I0429 19:05:09.438881    4360 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:05:09.444344    4360 addons.go:234] Setting addon storage-provisioner=true in "functional-980800"
	W0429 19:05:09.444344    4360 addons.go:243] addon storage-provisioner should already be in state true
	I0429 19:05:09.444344    4360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-980800"
	I0429 19:05:09.444344    4360 host.go:66] Checking if "functional-980800" exists ...
	I0429 19:05:09.445326    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:05:09.445735    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:05:09.461080    4360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:05:09.809035    4360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:05:09.842081    4360 node_ready.go:35] waiting up to 6m0s for node "functional-980800" to be "Ready" ...
	I0429 19:05:09.843268    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:09.843268    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.843268    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.843268    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.851721    4360 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:05:09.852369    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.852369    4360 round_trippers.go:580]     Audit-Id: 647442bf-48a0-455c-acfc-764c2b24cf05
	I0429 19:05:09.852464    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.852464    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.852464    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.852464    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.852464    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.853440    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:09.854121    4360 node_ready.go:49] node "functional-980800" has status "Ready":"True"
	I0429 19:05:09.854121    4360 node_ready.go:38] duration metric: took 12.0403ms for node "functional-980800" to be "Ready" ...
	I0429 19:05:09.854121    4360 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:05:09.854121    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods
	I0429 19:05:09.854121    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.854121    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.854121    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.861396    4360 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:05:09.861619    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.861619    4360 round_trippers.go:580]     Audit-Id: 78bc34c9-eaa1-41f2-bfdc-a73b08883b38
	I0429 19:05:09.861619    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.861619    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.861619    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.861619    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.861719    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.864008    4360 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"606"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-cqkc4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3","resourceVersion":"592","creationTimestamp":"2024-04-29T19:02:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d14cd212-4afb-4fd7-861a-cf7df764c17f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d14cd212-4afb-4fd7-861a-cf7df764c17f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50318 chars]
	I0429 19:05:09.867745    4360 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cqkc4" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:09.868003    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cqkc4
	I0429 19:05:09.868074    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.868074    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.868074    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.871489    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:09.871489    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.871489    4360 round_trippers.go:580]     Audit-Id: c0fa53d3-e349-4062-853c-24441c972841
	I0429 19:05:09.871489    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.871489    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.871489    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.872077    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.872077    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.872317    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-cqkc4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3","resourceVersion":"592","creationTimestamp":"2024-04-29T19:02:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d14cd212-4afb-4fd7-861a-cf7df764c17f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d14cd212-4afb-4fd7-861a-cf7df764c17f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0429 19:05:09.873517    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:09.873601    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.873601    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.873664    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.879931    4360 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:05:09.879931    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.879931    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.879931    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.879931    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.879931    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.879931    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.879931    4360 round_trippers.go:580]     Audit-Id: de0ace4e-582a-4af8-8c69-bc0c7f8bd8b4
	I0429 19:05:09.879931    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:09.880909    4360 pod_ready.go:92] pod "coredns-7db6d8ff4d-cqkc4" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:09.880909    4360 pod_ready.go:81] duration metric: took 13.0625ms for pod "coredns-7db6d8ff4d-cqkc4" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:09.880909    4360 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:09.880909    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/etcd-functional-980800
	I0429 19:05:09.880909    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.881912    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.881912    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.884920    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:09.884920    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.884920    4360 round_trippers.go:580]     Audit-Id: e6f33143-bdb7-4a5f-baaa-3493b210062a
	I0429 19:05:09.884920    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.884920    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.884920    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.884920    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.884920    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.885915    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-980800","namespace":"kube-system","uid":"fc2416af-4d87-4476-8c96-d70e6320dac4","resourceVersion":"593","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.245.90:2379","kubernetes.io/config.hash":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.mirror":"b414d76cd5d94e6ec031245907fe5885","kubernetes.io/config.seen":"2024-04-29T19:02:11.766082607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6369 chars]
	I0429 19:05:09.947664    4360 request.go:629] Waited for 60.6053ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:09.947932    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:09.947932    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:09.947932    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:09.947932    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:09.953315    4360 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:05:09.953315    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:09.953315    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:09.953315    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:09 GMT
	I0429 19:05:09.953315    4360 round_trippers.go:580]     Audit-Id: 51f4d952-7376-4208-9e65-b179ea7913c4
	I0429 19:05:09.953315    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:09.953315    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:09.953315    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:09.954255    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:09.954358    4360 pod_ready.go:92] pod "etcd-functional-980800" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:09.954892    4360 pod_ready.go:81] duration metric: took 73.9822ms for pod "etcd-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:09.954980    4360 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:10.153021    4360 request.go:629] Waited for 197.9007ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:10.153397    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-980800
	I0429 19:05:10.153467    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:10.153467    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:10.153467    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:10.156904    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:10.156904    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:10.156904    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:10.156904    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:10.156904    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:10.156904    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:10.156904    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:10 GMT
	I0429 19:05:10.156904    4360 round_trippers.go:580]     Audit-Id: ff130b9f-fe2f-4f9b-b2ae-4b1a67a34d24
	I0429 19:05:10.157551    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-980800","namespace":"kube-system","uid":"e6c4fa80-7b63-4e06-8813-594bd298a8dc","resourceVersion":"600","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.245.90:8441","kubernetes.io/config.hash":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.mirror":"98b1a8f4836eeba82cb283bd15d8f15a","kubernetes.io/config.seen":"2024-04-29T19:02:02.743789732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7902 chars]
	I0429 19:05:10.358796    4360 request.go:629] Waited for 200.3934ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:10.359304    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:10.359304    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:10.359304    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:10.359304    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:10.364275    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:10.364341    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:10.364341    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:10.364341    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:10.364341    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:10.364341    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:10.364341    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:10 GMT
	I0429 19:05:10.364341    4360 round_trippers.go:580]     Audit-Id: 7c232ea2-7171-46e2-8026-94726af2a076
	I0429 19:05:10.364870    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:10.365316    4360 pod_ready.go:92] pod "kube-apiserver-functional-980800" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:10.365316    4360 pod_ready.go:81] duration metric: took 410.3334ms for pod "kube-apiserver-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:10.365456    4360 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:10.551542    4360 request.go:629] Waited for 185.7595ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-980800
	I0429 19:05:10.551639    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-980800
	I0429 19:05:10.551639    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:10.551639    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:10.551639    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:10.554983    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:10.554983    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:10.554983    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:10.554983    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:10.554983    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:10.554983    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:10 GMT
	I0429 19:05:10.554983    4360 round_trippers.go:580]     Audit-Id: 93a12078-91fe-463e-b029-3e6f21507a9f
	I0429 19:05:10.554983    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:10.556343    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-980800","namespace":"kube-system","uid":"4b4efc39-d13c-4e21-8428-5e72f3ba655f","resourceVersion":"602","creationTimestamp":"2024-04-29T19:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f07dc7fac9d447c1fdf7ca0b5a6e82b9","kubernetes.io/config.mirror":"f07dc7fac9d447c1fdf7ca0b5a6e82b9","kubernetes.io/config.seen":"2024-04-29T19:02:02.743790932Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0429 19:05:10.757905    4360 request.go:629] Waited for 200.8194ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:10.758172    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:10.758172    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:10.758172    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:10.758172    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:10.761690    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:10.761690    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:10.761690    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:10.762381    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:10.762381    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:10.762381    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:10.762381    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:10 GMT
	I0429 19:05:10.762381    4360 round_trippers.go:580]     Audit-Id: 034073c3-c8e3-4587-afdf-d4e40110d32e
	I0429 19:05:10.763734    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:10.764345    4360 pod_ready.go:92] pod "kube-controller-manager-functional-980800" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:10.764543    4360 pod_ready.go:81] duration metric: took 399.0838ms for pod "kube-controller-manager-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:10.764592    4360 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-794mc" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:10.946923    4360 request.go:629] Waited for 182.1561ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-proxy-794mc
	I0429 19:05:10.947187    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-proxy-794mc
	I0429 19:05:10.947286    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:10.947331    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:10.947331    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:10.950869    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:10.951631    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:10.951631    4360 round_trippers.go:580]     Audit-Id: f8f326cb-ee64-4af3-b889-0e3d4700913b
	I0429 19:05:10.951631    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:10.951631    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:10.951631    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:10.951631    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:10.951631    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:10 GMT
	I0429 19:05:10.952016    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-794mc","generateName":"kube-proxy-","namespace":"kube-system","uid":"da9d80f8-9325-46df-813b-1e3801cf3e88","resourceVersion":"544","creationTimestamp":"2024-04-29T19:02:26Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4da3dab9-5932-420b-a9a9-d1226b53aeb2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4da3dab9-5932-420b-a9a9-d1226b53aeb2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0429 19:05:11.150463    4360 request.go:629] Waited for 197.6942ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:11.150463    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:11.150463    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:11.150622    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:11.150622    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:11.155115    4360 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:05:11.155379    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:11.155379    4360 round_trippers.go:580]     Audit-Id: 2e69c68c-71f1-4bac-9ce2-2cbb5444a09f
	I0429 19:05:11.155379    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:11.155379    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:11.155379    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:11.155379    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:11.155379    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:11 GMT
	I0429 19:05:11.155892    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:11.156762    4360 pod_ready.go:92] pod "kube-proxy-794mc" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:11.156864    4360 pod_ready.go:81] duration metric: took 392.2691ms for pod "kube-proxy-794mc" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:11.156864    4360 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:11.354860    4360 request.go:629] Waited for 197.8088ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-980800
	I0429 19:05:11.355091    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-980800
	I0429 19:05:11.355373    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:11.355373    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:11.355373    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:11.361660    4360 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:05:11.361660    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:11.361660    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:11.361660    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:11.361660    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:11.361660    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:11.361660    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:11 GMT
	I0429 19:05:11.361660    4360 round_trippers.go:580]     Audit-Id: c0ac8d56-8fce-4c32-afac-f25015a5d010
	I0429 19:05:11.362077    4360 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-980800","namespace":"kube-system","uid":"ee11cc90-27fe-40dc-be40-86478d68cfc6","resourceVersion":"595","creationTimestamp":"2024-04-29T19:02:12Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f53fce625206738b380a9ee824b9188c","kubernetes.io/config.mirror":"f53fce625206738b380a9ee824b9188c","kubernetes.io/config.seen":"2024-04-29T19:02:11.766066506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0429 19:05:11.543422    4360 request.go:629] Waited for 180.7675ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:11.543887    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes/functional-980800
	I0429 19:05:11.543887    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:11.543887    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:11.543887    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:11.546779    4360 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:05:11.546779    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:11.546779    4360 round_trippers.go:580]     Audit-Id: f209761f-4889-421d-8764-a1acb6b39023
	I0429 19:05:11.546779    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:11.547528    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:11.547528    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:11.547528    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:11.547528    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:11 GMT
	I0429 19:05:11.547905    4360 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-29T19:02:07Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0429 19:05:11.548392    4360 pod_ready.go:92] pod "kube-scheduler-functional-980800" in "kube-system" namespace has status "Ready":"True"
	I0429 19:05:11.548518    4360 pod_ready.go:81] duration metric: took 391.6515ms for pod "kube-scheduler-functional-980800" in "kube-system" namespace to be "Ready" ...
	I0429 19:05:11.548518    4360 pod_ready.go:38] duration metric: took 1.6943851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:05:11.548518    4360 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:05:11.562478    4360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:05:11.604727    4360 command_runner.go:130] > 6054
	I0429 19:05:11.604812    4360 api_server.go:72] duration metric: took 2.1664534s to wait for apiserver process to appear ...
	I0429 19:05:11.604884    4360 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:05:11.604954    4360 api_server.go:253] Checking apiserver healthz at https://172.17.245.90:8441/healthz ...
	I0429 19:05:11.613473    4360 api_server.go:279] https://172.17.245.90:8441/healthz returned 200:
	ok
	I0429 19:05:11.614526    4360 round_trippers.go:463] GET https://172.17.245.90:8441/version
	I0429 19:05:11.614575    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:11.614575    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:11.614575    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:11.615896    4360 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 19:05:11.615896    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:11.615896    4360 round_trippers.go:580]     Audit-Id: 5f4a8f29-40cc-42a2-a8a6-9fea4b723e62
	I0429 19:05:11.615896    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:11.615896    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:11.615896    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:11.616327    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:11.616327    4360 round_trippers.go:580]     Content-Length: 263
	I0429 19:05:11.616327    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:11 GMT
	I0429 19:05:11.616327    4360 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 19:05:11.616327    4360 api_server.go:141] control plane version: v1.30.0
	I0429 19:05:11.616327    4360 api_server.go:131] duration metric: took 11.4432ms to wait for apiserver health ...
	I0429 19:05:11.616327    4360 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:05:11.658551    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:05:11.658551    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:05:11.659971    4360 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:05:11.660630    4360 kapi.go:59] client config for functional-980800: &rest.Config{Host:"https://172.17.245.90:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-980800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-980800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 19:05:11.661529    4360 addons.go:234] Setting addon default-storageclass=true in "functional-980800"
	W0429 19:05:11.661615    4360 addons.go:243] addon default-storageclass should already be in state true
	I0429 19:05:11.661717    4360 host.go:66] Checking if "functional-980800" exists ...
	I0429 19:05:11.662899    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:05:11.695800    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:05:11.695800    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:05:11.702547    4360 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:05:11.704575    4360 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:05:11.704575    4360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 19:05:11.704575    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:05:11.744951    4360 request.go:629] Waited for 128.3248ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods
	I0429 19:05:11.745163    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods
	I0429 19:05:11.745163    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:11.745163    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:11.745163    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:11.753899    4360 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:05:11.753899    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:11.753899    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:11.753899    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:11.753899    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:11.753899    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:11.753899    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:11 GMT
	I0429 19:05:11.753899    4360 round_trippers.go:580]     Audit-Id: efa9bef7-ad14-4280-86cd-b7d0fe1cbcc4
	I0429 19:05:11.755708    4360 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"606"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-cqkc4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3","resourceVersion":"592","creationTimestamp":"2024-04-29T19:02:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d14cd212-4afb-4fd7-861a-cf7df764c17f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d14cd212-4afb-4fd7-861a-cf7df764c17f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50318 chars]
	I0429 19:05:11.759173    4360 system_pods.go:59] 7 kube-system pods found
	I0429 19:05:11.759173    4360 system_pods.go:61] "coredns-7db6d8ff4d-cqkc4" [41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3] Running
	I0429 19:05:11.759173    4360 system_pods.go:61] "etcd-functional-980800" [fc2416af-4d87-4476-8c96-d70e6320dac4] Running
	I0429 19:05:11.759173    4360 system_pods.go:61] "kube-apiserver-functional-980800" [e6c4fa80-7b63-4e06-8813-594bd298a8dc] Running
	I0429 19:05:11.759173    4360 system_pods.go:61] "kube-controller-manager-functional-980800" [4b4efc39-d13c-4e21-8428-5e72f3ba655f] Running
	I0429 19:05:11.759173    4360 system_pods.go:61] "kube-proxy-794mc" [da9d80f8-9325-46df-813b-1e3801cf3e88] Running
	I0429 19:05:11.759173    4360 system_pods.go:61] "kube-scheduler-functional-980800" [ee11cc90-27fe-40dc-be40-86478d68cfc6] Running
	I0429 19:05:11.759173    4360 system_pods.go:61] "storage-provisioner" [cb1b2baa-391c-407a-a97d-23d3d0d29f13] Running
	I0429 19:05:11.759173    4360 system_pods.go:74] duration metric: took 142.8447ms to wait for pod list to return data ...
	I0429 19:05:11.759173    4360 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:05:11.950224    4360 request.go:629] Waited for 191.0502ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/namespaces/default/serviceaccounts
	I0429 19:05:11.950521    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/default/serviceaccounts
	I0429 19:05:11.950521    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:11.950521    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:11.950521    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:11.958095    4360 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:05:11.958095    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:11.958095    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:11.958095    4360 round_trippers.go:580]     Content-Length: 261
	I0429 19:05:11.958095    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:11 GMT
	I0429 19:05:11.958095    4360 round_trippers.go:580]     Audit-Id: 167a0b2e-20fc-465d-8459-632ad0b69fa8
	I0429 19:05:11.958095    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:11.958095    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:11.958095    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:11.958543    4360 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"606"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3e0d3f2e-4cc3-4b1a-bfb9-0fe6660e5868","resourceVersion":"339","creationTimestamp":"2024-04-29T19:02:25Z"}}]}
	I0429 19:05:11.958896    4360 default_sa.go:45] found service account: "default"
	I0429 19:05:11.958939    4360 default_sa.go:55] duration metric: took 199.7647ms for default service account to be created ...
	I0429 19:05:11.958939    4360 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:05:12.155861    4360 request.go:629] Waited for 196.7587ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods
	I0429 19:05:12.156011    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/namespaces/kube-system/pods
	I0429 19:05:12.156203    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:12.156203    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:12.156203    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:12.163710    4360 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:05:12.164232    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:12.164232    4360 round_trippers.go:580]     Audit-Id: eea46a20-b122-4930-aee4-418ca9a5991e
	I0429 19:05:12.164232    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:12.164232    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:12.164232    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:12.164232    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:12.164232    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:12 GMT
	I0429 19:05:12.165616    4360 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"606"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-cqkc4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3","resourceVersion":"592","creationTimestamp":"2024-04-29T19:02:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d14cd212-4afb-4fd7-861a-cf7df764c17f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T19:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d14cd212-4afb-4fd7-861a-cf7df764c17f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50318 chars]
	I0429 19:05:12.169027    4360 system_pods.go:86] 7 kube-system pods found
	I0429 19:05:12.169157    4360 system_pods.go:89] "coredns-7db6d8ff4d-cqkc4" [41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3] Running
	I0429 19:05:12.169157    4360 system_pods.go:89] "etcd-functional-980800" [fc2416af-4d87-4476-8c96-d70e6320dac4] Running
	I0429 19:05:12.169157    4360 system_pods.go:89] "kube-apiserver-functional-980800" [e6c4fa80-7b63-4e06-8813-594bd298a8dc] Running
	I0429 19:05:12.169218    4360 system_pods.go:89] "kube-controller-manager-functional-980800" [4b4efc39-d13c-4e21-8428-5e72f3ba655f] Running
	I0429 19:05:12.169218    4360 system_pods.go:89] "kube-proxy-794mc" [da9d80f8-9325-46df-813b-1e3801cf3e88] Running
	I0429 19:05:12.169285    4360 system_pods.go:89] "kube-scheduler-functional-980800" [ee11cc90-27fe-40dc-be40-86478d68cfc6] Running
	I0429 19:05:12.169285    4360 system_pods.go:89] "storage-provisioner" [cb1b2baa-391c-407a-a97d-23d3d0d29f13] Running
	I0429 19:05:12.169285    4360 system_pods.go:126] duration metric: took 210.3449ms to wait for k8s-apps to be running ...
	I0429 19:05:12.169364    4360 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:05:12.182244    4360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:05:12.210316    4360 system_svc.go:56] duration metric: took 41.0305ms WaitForService to wait for kubelet
	I0429 19:05:12.210448    4360 kubeadm.go:576] duration metric: took 2.7720143s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:05:12.210448    4360 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:05:12.345209    4360 request.go:629] Waited for 134.514ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.245.90:8441/api/v1/nodes
	I0429 19:05:12.345494    4360 round_trippers.go:463] GET https://172.17.245.90:8441/api/v1/nodes
	I0429 19:05:12.345494    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:12.345494    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:12.345591    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:12.349107    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:12.349107    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:12.349107    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:12.349538    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:12 GMT
	I0429 19:05:12.349538    4360 round_trippers.go:580]     Audit-Id: dec28357-7dd4-4dc0-ad8d-a4f430925518
	I0429 19:05:12.349538    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:12.349538    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:12.349538    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:12.349871    4360 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"606"},"items":[{"metadata":{"name":"functional-980800","uid":"c7e288af-2d3e-4134-94de-6e0b73ce0d68","resourceVersion":"532","creationTimestamp":"2024-04-29T19:02:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-980800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"functional-980800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T19_02_12_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0429 19:05:12.350416    4360 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:05:12.350533    4360 node_conditions.go:123] node cpu capacity is 2
	I0429 19:05:12.350533    4360 node_conditions.go:105] duration metric: took 140.0833ms to run NodePressure ...
	I0429 19:05:12.350533    4360 start.go:240] waiting for startup goroutines ...
	I0429 19:05:13.950347    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:05:13.950347    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:05:13.950347    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:05:13.950347    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:05:13.950347    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:05:13.951490    4360 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 19:05:13.951581    4360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 19:05:13.951683    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
	I0429 19:05:16.201613    4360 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:05:16.201613    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:05:16.202532    4360 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
	I0429 19:05:16.627778    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:05:16.627778    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:05:16.628741    4360 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
	I0429 19:05:16.777682    4360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:05:17.697648    4360 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0429 19:05:17.697648    4360 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0429 19:05:17.697648    4360 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0429 19:05:17.697648    4360 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0429 19:05:17.697648    4360 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0429 19:05:17.697648    4360 command_runner.go:130] > pod/storage-provisioner configured
	I0429 19:05:18.823170    4360 main.go:141] libmachine: [stdout =====>] : 172.17.245.90
	
	I0429 19:05:18.823415    4360 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:05:18.823571    4360 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
	I0429 19:05:18.978569    4360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 19:05:19.176112    4360 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0429 19:05:19.176878    4360 round_trippers.go:463] GET https://172.17.245.90:8441/apis/storage.k8s.io/v1/storageclasses
	I0429 19:05:19.176969    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:19.176969    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:19.177073    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:19.180937    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:19.181470    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:19.181470    4360 round_trippers.go:580]     Audit-Id: 67c103b9-7361-454c-b387-4b3fcf6773cf
	I0429 19:05:19.181470    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:19.181470    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:19.181470    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:19.181470    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:19.181470    4360 round_trippers.go:580]     Content-Length: 1273
	I0429 19:05:19.181470    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:19 GMT
	I0429 19:05:19.181470    4360 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"613"},"items":[{"metadata":{"name":"standard","uid":"b191f65c-e24c-4aea-8b52-dc723b1cb6c6","resourceVersion":"429","creationTimestamp":"2024-04-29T19:02:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T19:02:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 19:05:19.182180    4360 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b191f65c-e24c-4aea-8b52-dc723b1cb6c6","resourceVersion":"429","creationTimestamp":"2024-04-29T19:02:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T19:02:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 19:05:19.182757    4360 round_trippers.go:463] PUT https://172.17.245.90:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 19:05:19.182757    4360 round_trippers.go:469] Request Headers:
	I0429 19:05:19.182757    4360 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:05:19.182757    4360 round_trippers.go:473]     Content-Type: application/json
	I0429 19:05:19.182852    4360 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:05:19.186850    4360 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:05:19.187970    4360 round_trippers.go:577] Response Headers:
	I0429 19:05:19.187970    4360 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d77d00d0-8fcf-4cd6-80b6-3031fcff79f2
	I0429 19:05:19.187970    4360 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d15d7c8-402f-4d0c-84fa-ef0c7c6b8bee
	I0429 19:05:19.187970    4360 round_trippers.go:580]     Content-Length: 1220
	I0429 19:05:19.187970    4360 round_trippers.go:580]     Date: Mon, 29 Apr 2024 19:05:19 GMT
	I0429 19:05:19.187970    4360 round_trippers.go:580]     Audit-Id: eede4c21-d93d-499e-9d34-aece6140de75
	I0429 19:05:19.187970    4360 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 19:05:19.187970    4360 round_trippers.go:580]     Content-Type: application/json
	I0429 19:05:19.187970    4360 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b191f65c-e24c-4aea-8b52-dc723b1cb6c6","resourceVersion":"429","creationTimestamp":"2024-04-29T19:02:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T19:02:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 19:05:19.191894    4360 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 19:05:19.194276    4360 addons.go:505] duration metric: took 9.7558634s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 19:05:19.194360    4360 start.go:245] waiting for cluster config update ...
	I0429 19:05:19.194446    4360 start.go:254] writing updated cluster config ...
	I0429 19:05:19.207913    4360 ssh_runner.go:195] Run: rm -f paused
	I0429 19:05:19.373648    4360 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 19:05:19.381641    4360 out.go:177] * Done! kubectl is now configured to use "functional-980800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.470170604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.470277959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.470312744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.471289528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.490758950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.491145485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.491187467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.491332706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.500989699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.501069465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.501089257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:52 functional-980800 dockerd[4038]: time="2024-04-29T19:04:52.501199510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:53 functional-980800 dockerd[4038]: time="2024-04-29T19:04:53.331470157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:04:53 functional-980800 dockerd[4038]: time="2024-04-29T19:04:53.331662078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:04:53 functional-980800 dockerd[4038]: time="2024-04-29T19:04:53.335882338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:53 functional-980800 dockerd[4038]: time="2024-04-29T19:04:53.336007587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:56 functional-980800 cri-dockerd[4342]: time="2024-04-29T19:04:56Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Apr 29 19:04:57 functional-980800 dockerd[4038]: time="2024-04-29T19:04:57.177920738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:04:57 functional-980800 dockerd[4038]: time="2024-04-29T19:04:57.178012705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:04:57 functional-980800 dockerd[4038]: time="2024-04-29T19:04:57.178029099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:57 functional-980800 dockerd[4038]: time="2024-04-29T19:04:57.178221429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:57 functional-980800 dockerd[4038]: time="2024-04-29T19:04:57.202587654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:04:57 functional-980800 dockerd[4038]: time="2024-04-29T19:04:57.202683419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:04:57 functional-980800 dockerd[4038]: time="2024-04-29T19:04:57.202704511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:04:57 functional-980800 dockerd[4038]: time="2024-04-29T19:04:57.203044788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be29a96d5ae3f       a0bf559e280cf       2 minutes ago       Running             kube-proxy                2                   210d322d6d350       kube-proxy-794mc
	8d4758a75129b       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   446abb7eee8f4       storage-provisioner
	79e3d0c2978ef       c42f13656d0b2       2 minutes ago       Running             kube-apiserver            2                   343c307245ade       kube-apiserver-functional-980800
	8ed3a13bf3050       c7aad43836fa5       2 minutes ago       Running             kube-controller-manager   2                   c0e34c05a6776       kube-controller-manager-functional-980800
	a9706a645e7ff       259c8277fcbbc       2 minutes ago       Running             kube-scheduler            2                   6c4510dc6d9b8       kube-scheduler-functional-980800
	c3c9dd2956f95       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   b2fb26bce7658       etcd-functional-980800
	ae3c0ee6653fc       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   1c4e3b34cb9b0       coredns-7db6d8ff4d-cqkc4
	7707b7f5ceef0       6e38f40d628db       2 minutes ago       Created             storage-provisioner       1                   dd83cbf52390a       storage-provisioner
	88f0fd37692e8       259c8277fcbbc       2 minutes ago       Exited              kube-scheduler            1                   87df54d365f11       kube-scheduler-functional-980800
	dc1b917dab241       c7aad43836fa5       2 minutes ago       Exited              kube-controller-manager   1                   4468bf580a3b1       kube-controller-manager-functional-980800
	ccb91f2068b79       3861cfcd7c04c       2 minutes ago       Exited              etcd                      1                   5bb24a35611d8       etcd-functional-980800
	756458d8706d3       c42f13656d0b2       2 minutes ago       Exited              kube-apiserver            1                   4884ca15aeb0f       kube-apiserver-functional-980800
	acc7c4a166a5a       a0bf559e280cf       2 minutes ago       Exited              kube-proxy                1                   9a0d6837fcbef       kube-proxy-794mc
	865935eb57b32       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   fd96ca31e28ca       coredns-7db6d8ff4d-cqkc4
	
	
	==> coredns [865935eb57b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 658b75f59357881579d818bea4574a099ffd8bf4e34cb2d6414c381890635887b0895574e607ab48d69c0bc2657640404a00a48de79c5b96ce27f6a68e70a912
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55961 - 39769 "HINFO IN 3168363888327024464.4250454178468479124. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048587682s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1591126063]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 19:02:28.124) (total time: 30002ms):
	Trace[1591126063]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:02:58.126)
	Trace[1591126063]: [30.002316007s] [30.002316007s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1456766681]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 19:02:28.125) (total time: 30001ms):
	Trace[1456766681]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:02:58.126)
	Trace[1456766681]: [30.001528212s] [30.001528212s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1663921986]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 19:02:28.126) (total time: 30001ms):
	Trace[1663921986]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:02:58.127)
	Trace[1663921986]: [30.001783913s] [30.001783913s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ae3c0ee6653f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:57694->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.3:57694->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 658b75f59357881579d818bea4574a099ffd8bf4e34cb2d6414c381890635887b0895574e607ab48d69c0bc2657640404a00a48de79c5b96ce27f6a68e70a912
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43962 - 46870 "HINFO IN 4913215117123443374.1475513430916589533. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034580045s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               functional-980800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-980800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=functional-980800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_02_12_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:02:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-980800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:06:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:06:58 +0000   Mon, 29 Apr 2024 19:02:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:06:58 +0000   Mon, 29 Apr 2024 19:02:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:06:58 +0000   Mon, 29 Apr 2024 19:02:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:06:58 +0000   Mon, 29 Apr 2024 19:02:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.245.90
	  Hostname:    functional-980800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	System Info:
	  Machine ID:                 d01591231f6f41b99caba2ee63a75311
	  System UUID:                256e8a8d-490e-8c4a-a5e7-29902ce963e7
	  Boot ID:                    e5e7f116-55d5-4ed7-a946-0d48e672d124
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-cqkc4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m41s
	  kube-system                 etcd-functional-980800                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m55s
	  kube-system                 kube-apiserver-functional-980800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-functional-980800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-794mc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-scheduler-functional-980800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m39s                  kube-proxy       
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)    kubelet          Node functional-980800 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)    kubelet          Node functional-980800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)    kubelet          Node functional-980800 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m56s                  kubelet          Node functional-980800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s                  kubelet          Node functional-980800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s                  kubelet          Node functional-980800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m53s                  kubelet          Node functional-980800 status is now: NodeReady
	  Normal  RegisteredNode           4m42s                  node-controller  Node functional-980800 event: Registered Node functional-980800 in Controller
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node functional-980800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node functional-980800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x7 over 2m16s)  kubelet          Node functional-980800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           118s                   node-controller  Node functional-980800 event: Registered Node functional-980800 in Controller
	
	
	==> dmesg <==
	[  +5.526978] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.757970] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[Apr29 19:02] systemd-fstab-generator[1733]: Ignoring "noauto" option for root device
	[  +0.110124] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.544067] systemd-fstab-generator[2137]: Ignoring "noauto" option for root device
	[  +0.141788] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.358151] systemd-fstab-generator[2375]: Ignoring "noauto" option for root device
	[  +0.187188] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.201762] kauditd_printk_skb: 69 callbacks suppressed
	[Apr29 19:04] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.742258] systemd-fstab-generator[3590]: Ignoring "noauto" option for root device
	[  +0.295100] systemd-fstab-generator[3602]: Ignoring "noauto" option for root device
	[  +0.330716] systemd-fstab-generator[3616]: Ignoring "noauto" option for root device
	[  +5.290109] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.138241] systemd-fstab-generator[4221]: Ignoring "noauto" option for root device
	[  +0.251755] systemd-fstab-generator[4233]: Ignoring "noauto" option for root device
	[  +0.243607] systemd-fstab-generator[4245]: Ignoring "noauto" option for root device
	[  +0.341589] systemd-fstab-generator[4274]: Ignoring "noauto" option for root device
	[  +0.993214] systemd-fstab-generator[4484]: Ignoring "noauto" option for root device
	[  +0.702380] kauditd_printk_skb: 142 callbacks suppressed
	[  +6.651200] systemd-fstab-generator[5829]: Ignoring "noauto" option for root device
	[  +0.162936] kauditd_printk_skb: 103 callbacks suppressed
	[  +5.798341] kauditd_printk_skb: 32 callbacks suppressed
	[Apr29 19:05] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.223790] systemd-fstab-generator[6303]: Ignoring "noauto" option for root device
	
	
	==> etcd [c3c9dd2956f9] <==
	{"level":"info","ts":"2024-04-29T19:04:52.854008Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:04:52.854035Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:04:52.854339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"68e4e85debe1395d switched to configuration voters=(7558421564721543517)"}
	{"level":"info","ts":"2024-04-29T19:04:52.854427Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9eaae5692003aa54","local-member-id":"68e4e85debe1395d","added-peer-id":"68e4e85debe1395d","added-peer-peer-urls":["https://172.17.245.90:2380"]}
	{"level":"info","ts":"2024-04-29T19:04:52.85451Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9eaae5692003aa54","local-member-id":"68e4e85debe1395d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:04:52.854544Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:04:52.885604Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:04:52.885915Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.245.90:2380"}
	{"level":"info","ts":"2024-04-29T19:04:52.895324Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.245.90:2380"}
	{"level":"info","ts":"2024-04-29T19:04:52.895998Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"68e4e85debe1395d","initial-advertise-peer-urls":["https://172.17.245.90:2380"],"listen-peer-urls":["https://172.17.245.90:2380"],"advertise-client-urls":["https://172.17.245.90:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.245.90:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:04:52.896061Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:04:54.408862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"68e4e85debe1395d is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T19:04:54.408944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"68e4e85debe1395d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:04:54.409009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"68e4e85debe1395d received MsgPreVoteResp from 68e4e85debe1395d at term 2"}
	{"level":"info","ts":"2024-04-29T19:04:54.409123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"68e4e85debe1395d became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T19:04:54.40914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"68e4e85debe1395d received MsgVoteResp from 68e4e85debe1395d at term 3"}
	{"level":"info","ts":"2024-04-29T19:04:54.409156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"68e4e85debe1395d became leader at term 3"}
	{"level":"info","ts":"2024-04-29T19:04:54.409183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 68e4e85debe1395d elected leader 68e4e85debe1395d at term 3"}
	{"level":"info","ts":"2024-04-29T19:04:54.423127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:04:54.430874Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.245.90:2379"}
	{"level":"info","ts":"2024-04-29T19:04:54.431187Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:04:54.437648Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:04:54.423085Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"68e4e85debe1395d","local-member-attributes":"{Name:functional-980800 ClientURLs:[https://172.17.245.90:2379]}","request-path":"/0/members/68e4e85debe1395d/attributes","cluster-id":"9eaae5692003aa54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:04:54.442254Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:04:54.442349Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [ccb91f2068b7] <==
	{"level":"warn","ts":"2024-04-29T19:04:46.692052Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-04-29T19:04:46.692152Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.245.90:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.245.90:2380","--initial-cluster=functional-980800=https://172.17.245.90:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.245.90:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.245.90:2380","--name=functional-980800","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-04-29T19:04:46.692296Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-04-29T19:04:46.69235Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-04-29T19:04:46.692394Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.245.90:2380"]}
	{"level":"info","ts":"2024-04-29T19:04:46.692441Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:04:46.70865Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.245.90:2379"]}
	{"level":"info","ts":"2024-04-29T19:04:46.720598Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-980800","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.245.90:2380"],"listen-peer-urls":["https://172.17.245.90:2380"],"advertise-client-urls":["https://172.17.245.90:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.245.90:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-04-29T19:04:46.768892Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"41.193425ms"}
	{"level":"info","ts":"2024-04-29T19:04:46.833167Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-29T19:04:46.902483Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9eaae5692003aa54","local-member-id":"68e4e85debe1395d","commit-index":570}
	{"level":"info","ts":"2024-04-29T19:04:46.902672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"68e4e85debe1395d switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-29T19:04:46.90292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"68e4e85debe1395d became follower at term 2"}
	{"level":"info","ts":"2024-04-29T19:04:46.902939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 68e4e85debe1395d [peers: [], term: 2, commit: 570, applied: 0, lastindex: 570, lastterm: 2]"}
	
	
	==> kernel <==
	 19:07:07 up 7 min,  0 users,  load average: 0.62, 0.74, 0.37
	Linux functional-980800 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [756458d8706d] <==
	I0429 19:04:46.025068       1 options.go:221] external host was not specified, using 172.17.245.90
	I0429 19:04:46.029196       1 server.go:148] Version: v1.30.0
	I0429 19:04:46.029239       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:04:47.162349       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0429 19:04:47.162880       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:04:47.162998       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0429 19:04:47.167247       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 19:04:47.183350       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0429 19:04:47.183374       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0429 19:04:47.183622       1 instance.go:299] Using reconciler: lease
	W0429 19:04:47.184810       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:04:48.163893       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:04:48.163907       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:04:48.185389       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [79e3d0c2978e] <==
	I0429 19:04:56.472142       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 19:04:56.472199       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 19:04:56.488716       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 19:04:56.488990       1 policy_source.go:224] refreshing policies
	I0429 19:04:56.527986       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 19:04:56.528181       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 19:04:56.535377       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 19:04:56.538149       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 19:04:56.538879       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 19:04:56.538893       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 19:04:56.539867       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:04:56.539931       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:04:56.539939       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:04:56.539968       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:04:56.539975       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:04:56.560956       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:04:57.356756       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 19:04:57.816360       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.245.90]
	I0429 19:04:57.819353       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 19:04:58.626469       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:04:58.647290       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:04:58.708117       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:04:58.775285       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:04:58.788185       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:05:09.307347       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8ed3a13bf305] <==
	I0429 19:05:09.098394       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0429 19:05:09.100788       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 19:05:09.101151       1 shared_informer.go:320] Caches are synced for expand
	I0429 19:05:09.102763       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0429 19:05:09.104228       1 shared_informer.go:320] Caches are synced for ephemeral
	I0429 19:05:09.104708       1 shared_informer.go:320] Caches are synced for GC
	I0429 19:05:09.106178       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0429 19:05:09.107565       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 19:05:09.107672       1 shared_informer.go:320] Caches are synced for service account
	I0429 19:05:09.110876       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 19:05:09.113296       1 shared_informer.go:320] Caches are synced for namespace
	I0429 19:05:09.114761       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0429 19:05:09.121747       1 shared_informer.go:320] Caches are synced for job
	I0429 19:05:09.129260       1 shared_informer.go:320] Caches are synced for TTL
	I0429 19:05:09.131786       1 shared_informer.go:320] Caches are synced for crt configmap
	I0429 19:05:09.154765       1 shared_informer.go:320] Caches are synced for PVC protection
	I0429 19:05:09.208128       1 shared_informer.go:320] Caches are synced for endpoint
	I0429 19:05:09.215296       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 19:05:09.220580       1 shared_informer.go:320] Caches are synced for disruption
	I0429 19:05:09.232012       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 19:05:09.275027       1 shared_informer.go:320] Caches are synced for HPA
	I0429 19:05:09.283180       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 19:05:09.749635       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:05:09.749865       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 19:05:09.755921       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [dc1b917dab24] <==
	
	
	==> kube-proxy [acc7c4a166a5] <==
	I0429 19:04:45.869447       1 server_linux.go:69] "Using iptables proxy"
	E0429 19:04:45.875088       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-980800\": dial tcp 172.17.245.90:8441: connect: connection refused"
	
	
	==> kube-proxy [be29a96d5ae3] <==
	I0429 19:04:57.429582       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:04:57.443778       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.245.90"]
	I0429 19:04:57.496622       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:04:57.496691       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:04:57.496710       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:04:57.503459       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:04:57.503875       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:04:57.504337       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:04:57.506249       1 config.go:192] "Starting service config controller"
	I0429 19:04:57.506453       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:04:57.506652       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:04:57.506861       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:04:57.507537       1 config.go:319] "Starting node config controller"
	I0429 19:04:57.507716       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:04:57.607347       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:04:57.607771       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:04:57.607904       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [88f0fd37692e] <==
	
	
	==> kube-scheduler [a9706a645e7f] <==
	I0429 19:04:53.289643       1 serving.go:380] Generated self-signed cert in-memory
	W0429 19:04:56.407742       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 19:04:56.408003       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:04:56.408113       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 19:04:56.408222       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 19:04:56.490549       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 19:04:56.490704       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:04:56.494540       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 19:04:56.495139       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 19:04:56.495166       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 19:04:56.497715       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 19:04:56.595888       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 19:04:56 functional-980800 kubelet[5836]: E0429 19:04:56.537342    5836 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-functional-980800\" already exists" pod="kube-system/kube-scheduler-functional-980800"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.588517    5836 kubelet_node_status.go:112] "Node was previously registered" node="functional-980800"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.588714    5836 kubelet_node_status.go:76] "Successfully registered node" node="functional-980800"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.590919    5836 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.593181    5836 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.640299    5836 apiserver.go:52] "Watching apiserver"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.646007    5836 topology_manager.go:215] "Topology Admit Handler" podUID="da9d80f8-9325-46df-813b-1e3801cf3e88" podNamespace="kube-system" podName="kube-proxy-794mc"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.646223    5836 topology_manager.go:215] "Topology Admit Handler" podUID="41c486ba-f8e7-49ce-a5e0-a8fd6a0cbfc3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cqkc4"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.646360    5836 topology_manager.go:215] "Topology Admit Handler" podUID="cb1b2baa-391c-407a-a97d-23d3d0d29f13" podNamespace="kube-system" podName="storage-provisioner"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.673968    5836 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.711333    5836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da9d80f8-9325-46df-813b-1e3801cf3e88-lib-modules\") pod \"kube-proxy-794mc\" (UID: \"da9d80f8-9325-46df-813b-1e3801cf3e88\") " pod="kube-system/kube-proxy-794mc"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.711427    5836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cb1b2baa-391c-407a-a97d-23d3d0d29f13-tmp\") pod \"storage-provisioner\" (UID: \"cb1b2baa-391c-407a-a97d-23d3d0d29f13\") " pod="kube-system/storage-provisioner"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.711468    5836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da9d80f8-9325-46df-813b-1e3801cf3e88-xtables-lock\") pod \"kube-proxy-794mc\" (UID: \"da9d80f8-9325-46df-813b-1e3801cf3e88\") " pod="kube-system/kube-proxy-794mc"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.947583    5836 scope.go:117] "RemoveContainer" containerID="acc7c4a166a5a62287d14bee0f6e0ee7f4b2968b5aca09f54f72056503724a93"
	Apr 29 19:04:56 functional-980800 kubelet[5836]: I0429 19:04:56.948152    5836 scope.go:117] "RemoveContainer" containerID="7707b7f5ceef09788c54308ce3fe675ee37d2691a75a0a5e79e600786576e00e"
	Apr 29 19:05:51 functional-980800 kubelet[5836]: E0429 19:05:51.828018    5836 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:05:51 functional-980800 kubelet[5836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:05:51 functional-980800 kubelet[5836]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:05:51 functional-980800 kubelet[5836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:05:51 functional-980800 kubelet[5836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:06:51 functional-980800 kubelet[5836]: E0429 19:06:51.822429    5836 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:06:51 functional-980800 kubelet[5836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:06:51 functional-980800 kubelet[5836]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:06:51 functional-980800 kubelet[5836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:06:51 functional-980800 kubelet[5836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7707b7f5ceef] <==
	
	
	==> storage-provisioner [8d4758a75129] <==
	I0429 19:04:57.312924       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 19:04:57.363471       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 19:04:57.363536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 19:05:14.791124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 19:05:14.792038       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-980800_f476bb2f-56ff-4a9e-a890-42cc75424c26!
	I0429 19:05:14.793661       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e26989cc-64a1-49b4-b189-4532306ca68a", APIVersion:"v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-980800_f476bb2f-56ff-4a9e-a890-42cc75424c26 became leader
	I0429 19:05:14.893630       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-980800_f476bb2f-56ff-4a9e-a890-42cc75424c26!
	

-- /stdout --
** stderr ** 
	W0429 19:06:59.496994    6244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-980800 -n functional-980800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-980800 -n functional-980800: (12.3134539s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-980800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (34.78s)

TestFunctional/parallel/ConfigCmd (2.71s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-980800 config unset cpus" to be -""- but got *"W0429 19:10:13.473992     920 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-980800 config get cpus: exit status 14 (407.9624ms)

** stderr ** 
	W0429 19:10:13.950952   13860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-980800 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0429 19:10:13.950952   13860 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-980800 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0429 19:10:14.613825    7676 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-980800 config get cpus" to be -""- but got *"W0429 19:10:15.049207   13644 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-980800 config unset cpus" to be -""- but got *"W0429 19:10:15.444340    4324 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-980800 config get cpus: exit status 14 (359.8497ms)

** stderr ** 
	W0429 19:10:15.844501   10060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-980800 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0429 19:10:15.844501   10060 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.71s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-980800 service --namespace=default --https --url hello-node: exit status 1 (15.0333981s)

** stderr ** 
	W0429 19:12:00.762770    9484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-980800 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

TestFunctional/parallel/ServiceCmd/Format (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-980800 service hello-node --url --format={{.IP}}: exit status 1 (15.0164541s)

** stderr ** 
	W0429 19:12:15.847748   12304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-980800 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

TestFunctional/parallel/ServiceCmd/URL (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-980800 service hello-node --url: exit status 1 (15.036971s)

** stderr ** 
	W0429 19:12:30.844996    9884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-980800 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.04s)

TestMultiControlPlane/serial/PingHostFromPods (70.47s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7nt6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7nt6 -- sh -c "ping -c 1 172.17.240.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7nt6 -- sh -c "ping -c 1 172.17.240.1": exit status 1 (10.597609s)

-- stdout --
	PING 172.17.240.1 (172.17.240.1): 56 data bytes
	
	--- 172.17.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0429 19:31:28.778711    3628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.240.1) from pod (busybox-fc5497c4f-k7nt6): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7rdw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7rdw -- sh -c "ping -c 1 172.17.240.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7rdw -- sh -c "ping -c 1 172.17.240.1": exit status 1 (10.5593365s)

-- stdout --
	PING 172.17.240.1 (172.17.240.1): 56 data bytes
	
	--- 172.17.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0429 19:31:39.967237    6392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.240.1) from pod (busybox-fc5497c4f-k7rdw): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-txsvr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-txsvr -- sh -c "ping -c 1 172.17.240.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-txsvr -- sh -c "ping -c 1 172.17.240.1": exit status 1 (10.6010847s)

-- stdout --
	PING 172.17.240.1 (172.17.240.1): 56 data bytes
	
	--- 172.17.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0429 19:31:51.126935    7200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.240.1) from pod (busybox-fc5497c4f-txsvr): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-513500 -n ha-513500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-513500 -n ha-513500: (12.5920338s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 logs -n 25: (9.2216465s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-980800                    | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:13 UTC | 29 Apr 24 19:13 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-980800 image build -t     | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:13 UTC | 29 Apr 24 19:13 UTC |
	|         | localhost/my-image:functional-980800 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-980800 image ls           | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:13 UTC | 29 Apr 24 19:14 UTC |
	| delete  | -p functional-980800                 | functional-980800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:17 UTC | 29 Apr 24 19:19 UTC |
	| start   | -p ha-513500 --wait=true             | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:19 UTC | 29 Apr 24 19:30 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- apply -f             | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- rollout status       | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- get pods -o          | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- get pods -o          | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-k7nt6 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-k7rdw --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-txsvr --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-k7nt6 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-k7rdw --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-txsvr --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-k7nt6 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-k7rdw -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-txsvr -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- get pods -o          | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-k7nt6              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC |                     |
	|         | busybox-fc5497c4f-k7nt6 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.240.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-k7rdw              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC |                     |
	|         | busybox-fc5497c4f-k7rdw -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.240.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC | 29 Apr 24 19:31 UTC |
	|         | busybox-fc5497c4f-txsvr              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-513500 -- exec                 | ha-513500         | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:31 UTC |                     |
	|         | busybox-fc5497c4f-txsvr -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.240.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:19:10
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:19:10.246588   14108 out.go:291] Setting OutFile to fd 1360 ...
	I0429 19:19:10.246588   14108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:19:10.246588   14108 out.go:304] Setting ErrFile to fd 1400...
	I0429 19:19:10.246588   14108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:19:10.270559   14108 out.go:298] Setting JSON to false
	I0429 19:19:10.274558   14108 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20289,"bootTime":1714398060,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 19:19:10.274558   14108 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 19:19:10.279558   14108 out.go:177] * [ha-513500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 19:19:10.286568   14108 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:19:10.285566   14108 notify.go:220] Checking for updates...
	I0429 19:19:10.291567   14108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:19:10.293560   14108 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 19:19:10.296738   14108 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:19:10.300635   14108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:19:10.304735   14108 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:19:15.746631   14108 out.go:177] * Using the hyperv driver based on user configuration
	I0429 19:19:15.751891   14108 start.go:297] selected driver: hyperv
	I0429 19:19:15.751891   14108 start.go:901] validating driver "hyperv" against <nil>
	I0429 19:19:15.751891   14108 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:19:15.805253   14108 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 19:19:15.806756   14108 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:19:15.807278   14108 cni.go:84] Creating CNI manager for ""
	I0429 19:19:15.807278   14108 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 19:19:15.807278   14108 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 19:19:15.807525   14108 start.go:340] cluster config:
	{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:19:15.807525   14108 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:19:15.812737   14108 out.go:177] * Starting "ha-513500" primary control-plane node in "ha-513500" cluster
	I0429 19:19:15.815539   14108 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:19:15.816075   14108 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 19:19:15.816075   14108 cache.go:56] Caching tarball of preloaded images
	I0429 19:19:15.816075   14108 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 19:19:15.816702   14108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 19:19:15.817058   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:19:15.817582   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json: {Name:mk44f90f8510bd5a50ac9a4fb1e24e93a65c8594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:19:15.818541   14108 start.go:360] acquireMachinesLock for ha-513500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:19:15.818541   14108 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-513500"
	I0429 19:19:15.819064   14108 start.go:93] Provisioning new machine with config: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:19:15.819156   14108 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 19:19:15.823822   14108 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:19:15.823822   14108 start.go:159] libmachine.API.Create for "ha-513500" (driver="hyperv")
	I0429 19:19:15.823822   14108 client.go:168] LocalClient.Create starting
	I0429 19:19:15.823822   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 19:19:15.824828   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:19:15.824828   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:19:15.825406   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 19:19:15.825406   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:19:15.825406   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:19:15.825406   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 19:19:17.955349   14108 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 19:19:17.955349   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:17.956362   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 19:19:19.786601   14108 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 19:19:19.787270   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:19.787373   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:19:21.362994   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:19:21.362994   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:21.363714   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:19:24.969886   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:19:24.969952   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:24.972613   14108 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:19:25.529263   14108 main.go:141] libmachine: Creating SSH key...
	I0429 19:19:25.667238   14108 main.go:141] libmachine: Creating VM...
	I0429 19:19:25.667238   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:19:28.583767   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:19:28.584862   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:28.584862   14108 main.go:141] libmachine: Using switch "Default Switch"
	I0429 19:19:28.585014   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:19:30.459472   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:19:30.460556   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:30.460556   14108 main.go:141] libmachine: Creating VHD
	I0429 19:19:30.460663   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 19:19:34.192917   14108 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D03D646A-0F76-4175-BEF8-7B7ECC51E326
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 19:19:34.192985   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:34.192985   14108 main.go:141] libmachine: Writing magic tar header
	I0429 19:19:34.193146   14108 main.go:141] libmachine: Writing SSH key tar header
	I0429 19:19:34.206514   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 19:19:37.358361   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:37.358888   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:37.358888   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\disk.vhd' -SizeBytes 20000MB
	I0429 19:19:39.911695   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:39.911781   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:39.911781   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-513500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 19:19:43.586735   14108 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-513500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 19:19:43.586735   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:43.587582   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-513500 -DynamicMemoryEnabled $false
	I0429 19:19:45.801894   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:45.801894   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:45.801975   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-513500 -Count 2
	I0429 19:19:47.960247   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:47.960247   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:47.961008   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-513500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\boot2docker.iso'
	I0429 19:19:50.527908   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:50.527908   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:50.527908   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-513500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\disk.vhd'
	I0429 19:19:53.221294   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:53.221485   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:53.221485   14108 main.go:141] libmachine: Starting VM...
	I0429 19:19:53.221546   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-513500
	I0429 19:19:56.276471   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:56.277467   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:56.277517   14108 main.go:141] libmachine: Waiting for host to start...
	I0429 19:19:56.277622   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:19:58.505529   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:19:58.505529   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:58.505529   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:01.060141   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:20:01.060141   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:02.063828   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:04.275033   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:04.275428   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:04.275517   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:06.784135   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:20:06.784612   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:07.800321   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:09.970959   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:09.971020   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:09.971268   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:12.479461   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:20:12.479461   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:13.484174   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:15.688730   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:15.688730   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:15.688820   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:18.209336   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:20:18.209336   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:19.224347   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:21.361528   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:21.361528   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:21.361528   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:24.072091   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:24.072091   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:24.072091   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:26.221431   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:26.221825   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:26.221910   14108 machine.go:94] provisionDockerMachine start ...
	I0429 19:20:26.222027   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:28.380817   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:28.380998   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:28.381098   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:30.976776   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:30.976776   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:30.983524   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:20:30.993618   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:20:30.993618   14108 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:20:31.132229   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 19:20:31.132391   14108 buildroot.go:166] provisioning hostname "ha-513500"
	I0429 19:20:31.132391   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:33.325249   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:33.325249   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:33.325545   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:35.881046   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:35.881125   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:35.888317   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:20:35.888900   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:20:35.888900   14108 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-513500 && echo "ha-513500" | sudo tee /etc/hostname
	I0429 19:20:36.036259   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-513500
	
	I0429 19:20:36.036391   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:38.113040   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:38.114051   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:38.114051   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:40.630914   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:40.631217   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:40.637214   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:20:40.637889   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:20:40.637889   14108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:20:40.784649   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:20:40.784649   14108 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 19:20:40.784649   14108 buildroot.go:174] setting up certificates
	I0429 19:20:40.784649   14108 provision.go:84] configureAuth start
	I0429 19:20:40.785222   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:42.878866   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:42.878866   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:42.878960   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:45.430770   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:45.430952   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:45.431037   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:47.544305   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:47.544305   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:47.545029   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:50.080352   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:50.081024   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:50.081024   14108 provision.go:143] copyHostCerts
	I0429 19:20:50.081024   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 19:20:50.081024   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 19:20:50.081024   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 19:20:50.082037   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 19:20:50.082963   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 19:20:50.082963   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 19:20:50.083497   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 19:20:50.083567   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 19:20:50.084221   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 19:20:50.084855   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 19:20:50.084855   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 19:20:50.085390   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 19:20:50.086284   14108 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-513500 san=[127.0.0.1 172.17.240.42 ha-513500 localhost minikube]
	I0429 19:20:50.333962   14108 provision.go:177] copyRemoteCerts
	I0429 19:20:50.347400   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:20:50.347483   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:52.478342   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:52.478342   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:52.478549   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:55.045866   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:55.045866   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:55.046174   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:20:55.157056   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8096221s)
	I0429 19:20:55.157251   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 19:20:55.157251   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 19:20:55.204748   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 19:20:55.204748   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 19:20:55.252301   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 19:20:55.256671   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 19:20:55.307875   14108 provision.go:87] duration metric: took 14.5231243s to configureAuth
	I0429 19:20:55.307875   14108 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:20:55.308484   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:20:55.308484   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:57.428610   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:57.428817   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:57.428914   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:00.005952   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:00.006879   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:00.013913   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:00.014453   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:00.014453   14108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 19:21:00.151165   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 19:21:00.151165   14108 buildroot.go:70] root file system type: tmpfs
	I0429 19:21:00.151702   14108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 19:21:00.151846   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:02.320088   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:02.321063   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:02.321063   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:04.901561   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:04.902359   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:04.908819   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:04.909575   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:04.909575   14108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 19:21:05.074578   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 19:21:05.074578   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:07.217506   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:07.218030   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:07.218133   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:09.813495   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:09.813663   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:09.820592   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:09.821328   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:09.821328   14108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 19:21:12.033597   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
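The `diff || { mv; daemon-reload; restart; }` command above is a write-to-`.new`, compare, then swap pattern: the unit file is only replaced (and docker only restarted) when the rendered content actually differs. A minimal sketch against a scratch directory instead of `/lib/systemd/system` (paths and unit content are placeholders):

```shell
# "Write .new, diff, swap" unit-update pattern from the log above.
UNITDIR=$(mktemp -d)
UNIT="$UNITDIR/docker.service"

printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$UNIT.new"

# diff exits non-zero when the target is missing or differs; only then swap.
# The real flow follows the mv with daemon-reload / enable / restart.
if ! diff -u "$UNIT" "$UNIT.new" 2>/dev/null; then
    mv "$UNIT.new" "$UNIT"
    echo "unit updated"
fi
```

On first run the target does not exist, so `diff` fails and the new file is moved into place; a second run with identical content would leave everything untouched.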
	I0429 19:21:12.033597   14108 machine.go:97] duration metric: took 45.8113658s to provisionDockerMachine
	I0429 19:21:12.033597   14108 client.go:171] duration metric: took 1m56.2089575s to LocalClient.Create
	I0429 19:21:12.033597   14108 start.go:167] duration metric: took 1m56.2089575s to libmachine.API.Create "ha-513500"
	I0429 19:21:12.034179   14108 start.go:293] postStartSetup for "ha-513500" (driver="hyperv")
	I0429 19:21:12.034179   14108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:21:12.045874   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:21:12.045874   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:14.173354   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:14.173500   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:14.173577   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:16.757120   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:16.757251   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:16.757892   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:21:16.874631   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8286172s)
	I0429 19:21:16.889137   14108 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:21:16.901704   14108 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:21:16.901862   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 19:21:16.902539   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 19:21:16.904359   14108 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 19:21:16.904480   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 19:21:16.920436   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:21:16.941353   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 19:21:16.990264   14108 start.go:296] duration metric: took 4.9560505s for postStartSetup
	I0429 19:21:16.993819   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:19.118124   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:19.118403   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:19.118403   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:21.676206   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:21.676550   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:21.676738   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:21:21.679851   14108 start.go:128] duration metric: took 2m5.8597325s to createHost
	I0429 19:21:21.679934   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:23.812228   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:23.812228   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:23.812228   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:26.375058   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:26.375559   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:26.381728   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:26.382270   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:26.382270   14108 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:21:26.510156   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418486.510636075
	
	I0429 19:21:26.510156   14108 fix.go:216] guest clock: 1714418486.510636075
	I0429 19:21:26.510156   14108 fix.go:229] Guest: 2024-04-29 19:21:26.510636075 +0000 UTC Remote: 2024-04-29 19:21:21.6798513 +0000 UTC m=+131.612114101 (delta=4.830784775s)
	I0429 19:21:26.510156   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:28.650272   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:28.650572   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:28.650572   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:31.267724   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:31.267898   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:31.275157   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:31.275835   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:31.275884   14108 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714418486
	I0429 19:21:31.425770   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 19:21:26 UTC 2024
	
	I0429 19:21:31.425837   14108 fix.go:236] clock set: Mon Apr 29 19:21:26 UTC 2024
	 (err=<nil>)
	I0429 19:21:31.425837   14108 start.go:83] releasing machines lock for "ha-513500", held for 2m15.6063439s
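The clock-fix sequence above reads the guest's epoch time over SSH (`date +%s.%N`), compares it to the host's, and resyncs with `sudo date -s @<epoch>` when they drift (here a ~4.8s delta). A self-contained sketch of the delta check, using the timestamps from this log as fixed inputs and only printing the resync command rather than running it:

```shell
# Clock-skew check mirroring fix.go's guest-vs-remote comparison above.
GUEST=1714418486   # guest `date +%s` (from the log)
HOST=1714418481    # host wall clock at roughly the same instant (approx., from the log)

DELTA=$((GUEST - HOST))
echo "delta=${DELTA}s"
if [ "$DELTA" -ne 0 ]; then
    # The real flow runs this on the guest over SSH:
    echo "would run: sudo date -s @$GUEST"
fi
```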
	I0429 19:21:31.426110   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:33.605823   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:33.605823   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:33.605910   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:36.180062   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:36.180062   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:36.184523   14108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:21:36.184615   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:36.197374   14108 ssh_runner.go:195] Run: cat /version.json
	I0429 19:21:36.198387   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:38.323743   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:38.323743   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:38.323958   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:38.331721   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:38.331721   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:38.331721   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:41.002676   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:41.003681   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:41.004315   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:21:41.028036   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:41.028036   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:41.029326   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:21:41.164966   14108 ssh_runner.go:235] Completed: cat /version.json: (4.9675576s)
	I0429 19:21:41.164966   14108 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9804083s)
	I0429 19:21:41.180799   14108 ssh_runner.go:195] Run: systemctl --version
	I0429 19:21:41.203982   14108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:21:41.212207   14108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:21:41.225252   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:21:41.255874   14108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:21:41.255874   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:21:41.256251   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:21:41.309992   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 19:21:41.342790   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 19:21:41.361245   14108 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 19:21:41.374831   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 19:21:41.407402   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:21:41.447005   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 19:21:41.487513   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:21:41.523062   14108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:21:41.557006   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 19:21:41.592831   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 19:21:41.631741   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
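The run of `sed -i -r` commands above rewrites containerd's `/etc/containerd/config.toml` in place, for example forcing `SystemdCgroup = false` to match the "cgroupfs" driver chosen for this run. A sketch of that one edit against a scratch config (the sample TOML content is an assumption, the `sed` expression is the one from the log):

```shell
# Toggle containerd's SystemdCgroup setting, as the provisioning step does above.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution the log runs, preserving leading indentation via \1
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
grep SystemdCgroup "$CFG"
```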
	I0429 19:21:41.667496   14108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:21:41.702083   14108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:21:41.736581   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:41.945761   14108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 19:21:41.982603   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:21:41.995896   14108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 19:21:42.037880   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:21:42.071132   14108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:21:42.120351   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:21:42.160312   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:21:42.198546   14108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 19:21:42.265144   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:21:42.289256   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:21:42.341563   14108 ssh_runner.go:195] Run: which cri-dockerd
	I0429 19:21:42.361428   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 19:21:42.380469   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 19:21:42.427129   14108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 19:21:42.640971   14108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 19:21:42.840349   14108 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 19:21:42.840671   14108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 19:21:42.890853   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:43.111882   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 19:21:45.708116   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5962161s)
	I0429 19:21:45.721198   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 19:21:45.762639   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:21:45.805458   14108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 19:21:46.023417   14108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 19:21:46.254513   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:46.474984   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 19:21:46.521315   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:21:46.563144   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:46.784080   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 19:21:46.908286   14108 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 19:21:46.921654   14108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 19:21:46.932302   14108 start.go:562] Will wait 60s for crictl version
	I0429 19:21:46.943667   14108 ssh_runner.go:195] Run: which crictl
	I0429 19:21:46.965242   14108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:21:47.031807   14108 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 19:21:47.042045   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:21:47.092589   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:21:47.169869   14108 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 19:21:47.170444   14108 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 19:21:47.174458   14108 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 19:21:47.175021   14108 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 19:21:47.175021   14108 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 19:21:47.175021   14108 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 19:21:47.177913   14108 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 19:21:47.177913   14108 ip.go:210] interface addr: 172.17.240.1/20
	I0429 19:21:47.193249   14108 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 19:21:47.201249   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:21:47.242307   14108 kubeadm.go:877] updating cluster {Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:21:47.242450   14108 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:21:47.255808   14108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 19:21:47.281163   14108 docker.go:685] Got preloaded images: 
	I0429 19:21:47.281256   14108 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 19:21:47.297948   14108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 19:21:47.330535   14108 ssh_runner.go:195] Run: which lz4
	I0429 19:21:47.337342   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 19:21:47.349898   14108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 19:21:47.358360   14108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 19:21:47.358616   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 19:21:49.119492   14108 docker.go:649] duration metric: took 1.7821379s to copy over tarball
	I0429 19:21:49.134247   14108 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 19:21:58.058678   14108 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9243688s)
	I0429 19:21:58.058678   14108 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 19:21:58.130090   14108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 19:21:58.153440   14108 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 19:21:58.199280   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:58.426469   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 19:22:01.858330   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.431837s)
	I0429 19:22:01.871686   14108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 19:22:01.897166   14108 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 19:22:01.897166   14108 cache_images.go:84] Images are preloaded, skipping loading
	I0429 19:22:01.897166   14108 kubeadm.go:928] updating node { 172.17.240.42 8443 v1.30.0 docker true true} ...
	I0429 19:22:01.897166   14108 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-513500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.240.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:22:01.908276   14108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 19:22:01.948140   14108 cni.go:84] Creating CNI manager for ""
	I0429 19:22:01.948234   14108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 19:22:01.948234   14108 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:22:01.948289   14108 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.240.42 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-513500 NodeName:ha-513500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.240.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.240.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:22:01.948359   14108 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.240.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-513500"
	  kubeletExtraArgs:
	    node-ip: 172.17.240.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.240.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 19:22:01.948359   14108 kube-vip.go:115] generating kube-vip config ...
	I0429 19:22:01.962452   14108 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:22:01.995097   14108 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:22:01.995332   14108 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0429 19:22:02.012243   14108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:22:02.032079   14108 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:22:02.047246   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 19:22:02.069559   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0429 19:22:02.105306   14108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:22:02.143052   14108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0429 19:22:02.177699   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0429 19:22:02.224654   14108 ssh_runner.go:195] Run: grep 172.17.255.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:22:02.231752   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:22:02.269800   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:22:02.478831   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:22:02.515240   14108 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500 for IP: 172.17.240.42
	I0429 19:22:02.515240   14108 certs.go:194] generating shared ca certs ...
	I0429 19:22:02.515240   14108 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.516300   14108 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 19:22:02.516649   14108 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 19:22:02.516916   14108 certs.go:256] generating profile certs ...
	I0429 19:22:02.517563   14108 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key
	I0429 19:22:02.517752   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.crt with IP's: []
	I0429 19:22:02.651407   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.crt ...
	I0429 19:22:02.652426   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.crt: {Name:mk5210789812ded2c429974ce014fe11cc92a699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.653895   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key ...
	I0429 19:22:02.653895   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key: {Name:mk6113744d78fd6e93c7abad85557d1bc9ea4511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.654430   14108 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.6e29dd6d
	I0429 19:22:02.654430   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.6e29dd6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.240.42 172.17.255.254]
	I0429 19:22:02.895592   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.6e29dd6d ...
	I0429 19:22:02.895592   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.6e29dd6d: {Name:mkefbdf7c45d1d40d9809f8e3a48ec166982cc2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.897668   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.6e29dd6d ...
	I0429 19:22:02.897668   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.6e29dd6d: {Name:mk6e50931413457f4c849441f1a52a798c4a39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.898759   14108 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.6e29dd6d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt
	I0429 19:22:02.911701   14108 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.6e29dd6d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key
	I0429 19:22:02.912612   14108 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key
	I0429 19:22:02.912612   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt with IP's: []
	I0429 19:22:03.085370   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt ...
	I0429 19:22:03.085370   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt: {Name:mk1836f19c366a42bc69dbe804cf2f6504d32531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:03.085964   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key ...
	I0429 19:22:03.085964   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key: {Name:mkb633fcea49aec8fa95bf997683078363622fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:03.087151   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:22:03.088109   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:22:03.088292   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:22:03.088469   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:22:03.088668   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:22:03.088818   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:22:03.088982   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:22:03.098045   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:22:03.099024   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 19:22:03.099224   14108 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 19:22:03.099419   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 19:22:03.099419   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 19:22:03.099419   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 19:22:03.100109   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 19:22:03.100452   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 19:22:03.100452   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 19:22:03.101033   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:22:03.101189   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 19:22:03.102547   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:22:03.167021   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 19:22:03.222763   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:22:03.275180   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:22:03.326748   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 19:22:03.376175   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 19:22:03.421660   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:22:03.464456   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 19:22:03.513935   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 19:22:03.567361   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:22:03.627448   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 19:22:03.683367   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:22:03.731739   14108 ssh_runner.go:195] Run: openssl version
	I0429 19:22:03.755665   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 19:22:03.795635   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 19:22:03.804983   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 19:22:03.819082   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 19:22:03.844317   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:22:03.880455   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:22:03.916478   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:22:03.923628   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:22:03.939033   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:22:03.963507   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:22:03.999626   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 19:22:04.034853   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 19:22:04.041310   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 19:22:04.056279   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 19:22:04.082562   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 19:22:04.116996   14108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:22:04.124404   14108 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:22:04.124404   14108 kubeadm.go:391] StartCluster: {Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clu
sterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:22:04.136181   14108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 19:22:04.173513   14108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 19:22:04.214952   14108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 19:22:04.255841   14108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:22:04.276434   14108 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 19:22:04.276434   14108 kubeadm.go:156] found existing configuration files:
	
	I0429 19:22:04.293023   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 19:22:04.312938   14108 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 19:22:04.326258   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 19:22:04.359216   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 19:22:04.379021   14108 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 19:22:04.391884   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 19:22:04.424241   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 19:22:04.442153   14108 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 19:22:04.457711   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:22:04.490976   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 19:22:04.510981   14108 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 19:22:04.527622   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 19:22:04.546429   14108 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 19:22:05.075277   14108 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 19:22:19.880857   14108 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 19:22:19.880985   14108 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 19:22:19.881209   14108 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 19:22:19.881461   14108 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 19:22:19.881461   14108 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 19:22:19.881461   14108 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 19:22:19.886543   14108 out.go:204]   - Generating certificates and keys ...
	I0429 19:22:19.886713   14108 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 19:22:19.886713   14108 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 19:22:19.886713   14108 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 19:22:19.887361   14108 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 19:22:19.887544   14108 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 19:22:19.887544   14108 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 19:22:19.887544   14108 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 19:22:19.887544   14108 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-513500 localhost] and IPs [172.17.240.42 127.0.0.1 ::1]
	I0429 19:22:19.888152   14108 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 19:22:19.888291   14108 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-513500 localhost] and IPs [172.17.240.42 127.0.0.1 ::1]
	I0429 19:22:19.888291   14108 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 19:22:19.888291   14108 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 19:22:19.888291   14108 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 19:22:19.888874   14108 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 19:22:19.888924   14108 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 19:22:19.888924   14108 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 19:22:19.888924   14108 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 19:22:19.888924   14108 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 19:22:19.889486   14108 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 19:22:19.889565   14108 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 19:22:19.889565   14108 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 19:22:19.892055   14108 out.go:204]   - Booting up control plane ...
	I0429 19:22:19.892383   14108 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 19:22:19.892600   14108 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 19:22:19.892800   14108 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 19:22:19.892800   14108 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 19:22:19.892800   14108 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 19:22:19.892800   14108 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 19:22:19.893522   14108 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 19:22:19.893522   14108 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 19:22:19.893860   14108 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002752859s
	I0429 19:22:19.893938   14108 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 19:22:19.893938   14108 kubeadm.go:309] [api-check] The API server is healthy after 8.85459715s
	I0429 19:22:19.893938   14108 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 19:22:19.895147   14108 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 19:22:19.895147   14108 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 19:22:19.895401   14108 kubeadm.go:309] [mark-control-plane] Marking the node ha-513500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 19:22:19.895401   14108 kubeadm.go:309] [bootstrap-token] Using token: ljuqwa.dibj2v5bire23t8b
	I0429 19:22:19.898780   14108 out.go:204]   - Configuring RBAC rules ...
	I0429 19:22:19.899397   14108 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 19:22:19.899397   14108 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 19:22:19.899397   14108 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 19:22:19.899978   14108 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 19:22:19.899978   14108 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 19:22:19.899978   14108 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 19:22:19.900802   14108 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 19:22:19.900802   14108 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 19:22:19.901010   14108 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 19:22:19.901010   14108 kubeadm.go:309] 
	I0429 19:22:19.901236   14108 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 19:22:19.901309   14108 kubeadm.go:309] 
	I0429 19:22:19.901488   14108 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 19:22:19.901542   14108 kubeadm.go:309] 
	I0429 19:22:19.901542   14108 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 19:22:19.901733   14108 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 19:22:19.901733   14108 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 19:22:19.901733   14108 kubeadm.go:309] 
	I0429 19:22:19.902018   14108 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 19:22:19.902018   14108 kubeadm.go:309] 
	I0429 19:22:19.902114   14108 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 19:22:19.902114   14108 kubeadm.go:309] 
	I0429 19:22:19.902114   14108 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 19:22:19.902114   14108 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 19:22:19.902670   14108 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 19:22:19.902670   14108 kubeadm.go:309] 
	I0429 19:22:19.902840   14108 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 19:22:19.902976   14108 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 19:22:19.902976   14108 kubeadm.go:309] 
	I0429 19:22:19.902976   14108 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ljuqwa.dibj2v5bire23t8b \
	I0429 19:22:19.902976   14108 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 19:22:19.903664   14108 kubeadm.go:309] 	--control-plane 
	I0429 19:22:19.903664   14108 kubeadm.go:309] 
	I0429 19:22:19.903832   14108 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 19:22:19.904071   14108 kubeadm.go:309] 
	I0429 19:22:19.904384   14108 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ljuqwa.dibj2v5bire23t8b \
	I0429 19:22:19.904571   14108 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 19:22:19.904571   14108 cni.go:84] Creating CNI manager for ""
	I0429 19:22:19.904571   14108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 19:22:19.909581   14108 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 19:22:19.927899   14108 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 19:22:19.936766   14108 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 19:22:19.936766   14108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 19:22:19.992085   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 19:22:20.662675   14108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 19:22:20.676671   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-513500 minikube.k8s.io/updated_at=2024_04_29T19_22_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-513500 minikube.k8s.io/primary=true
	I0429 19:22:20.676671   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:20.683259   14108 ops.go:34] apiserver oom_adj: -16
	I0429 19:22:20.976706   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:21.480112   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:21.984150   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:22.490723   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:22.978520   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:23.478414   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:23.981472   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:24.483278   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:24.985734   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:25.486068   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:25.988619   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:26.477862   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:26.984274   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:27.488001   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:27.988895   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:28.476719   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:28.983088   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:29.490792   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:29.984863   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:30.482257   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:30.983218   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:31.490342   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:31.980088   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:32.489634   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:32.674523   14108 kubeadm.go:1107] duration metric: took 12.0117638s to wait for elevateKubeSystemPrivileges
	W0429 19:22:32.674711   14108 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 19:22:32.674711   14108 kubeadm.go:393] duration metric: took 28.5501076s to StartCluster
	I0429 19:22:32.674711   14108 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:32.674924   14108 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:22:32.676426   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:32.678543   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 19:22:32.678744   14108 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:22:32.678869   14108 start.go:240] waiting for startup goroutines ...
	I0429 19:22:32.678988   14108 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 19:22:32.678988   14108 addons.go:69] Setting storage-provisioner=true in profile "ha-513500"
	I0429 19:22:32.678988   14108 addons.go:234] Setting addon storage-provisioner=true in "ha-513500"
	I0429 19:22:32.678988   14108 addons.go:69] Setting default-storageclass=true in profile "ha-513500"
	I0429 19:22:32.678988   14108 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-513500"
	I0429 19:22:32.678988   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:22:32.678988   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:22:32.679890   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:32.679890   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:32.878317   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 19:22:33.335459   14108 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 19:22:35.018264   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:35.018264   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:35.021042   14108 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:22:35.023485   14108 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:22:35.023485   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 19:22:35.023485   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:35.038702   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:35.038702   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:35.039729   14108 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:22:35.039729   14108 kapi.go:59] client config for ha-513500: &rest.Config{Host:"https://172.17.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 19:22:35.043721   14108 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 19:22:35.044711   14108 addons.go:234] Setting addon default-storageclass=true in "ha-513500"
	I0429 19:22:35.046690   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:22:35.047690   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:37.327113   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:37.327113   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:37.327113   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:22:37.337553   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:37.337602   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:37.337669   14108 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 19:22:37.337669   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 19:22:37.337669   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:39.586787   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:39.587710   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:39.587710   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:22:40.058636   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:22:40.058681   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:40.058736   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:22:40.235973   14108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:22:42.286299   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:22:42.286587   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:42.287007   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:22:42.447645   14108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 19:22:42.634785   14108 round_trippers.go:463] GET https://172.17.255.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 19:22:42.634905   14108 round_trippers.go:469] Request Headers:
	I0429 19:22:42.634905   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:22:42.634905   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:22:42.649420   14108 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 19:22:42.650493   14108 round_trippers.go:463] PUT https://172.17.255.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 19:22:42.650493   14108 round_trippers.go:469] Request Headers:
	I0429 19:22:42.650493   14108 round_trippers.go:473]     Content-Type: application/json
	I0429 19:22:42.650493   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:22:42.650493   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:22:42.654996   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:22:42.661067   14108 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 19:22:42.665184   14108 addons.go:505] duration metric: took 9.9861584s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 19:22:42.665184   14108 start.go:245] waiting for cluster config update ...
	I0429 19:22:42.665184   14108 start.go:254] writing updated cluster config ...
	I0429 19:22:42.670950   14108 out.go:177] 
	I0429 19:22:42.682225   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:22:42.682225   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:22:42.687758   14108 out.go:177] * Starting "ha-513500-m02" control-plane node in "ha-513500" cluster
	I0429 19:22:42.693746   14108 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:22:42.693746   14108 cache.go:56] Caching tarball of preloaded images
	I0429 19:22:42.694820   14108 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 19:22:42.694820   14108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 19:22:42.695104   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:22:42.696728   14108 start.go:360] acquireMachinesLock for ha-513500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:22:42.696728   14108 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-513500-m02"
	I0429 19:22:42.697724   14108 start.go:93] Provisioning new machine with config: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:22:42.697724   14108 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 19:22:42.702727   14108 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:22:42.703790   14108 start.go:159] libmachine.API.Create for "ha-513500" (driver="hyperv")
	I0429 19:22:42.703790   14108 client.go:168] LocalClient.Create starting
	I0429 19:22:42.703969   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 19:22:42.703969   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:22:42.704478   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:22:42.704478   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 19:22:42.704478   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:22:42.704849   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:22:42.704849   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 19:22:44.694752   14108 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 19:22:44.695727   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:44.696027   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 19:22:46.515411   14108 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 19:22:46.515411   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:46.515936   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:22:48.067421   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:22:48.067421   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:48.067421   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:22:51.776981   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:22:51.777187   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:51.779781   14108 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:22:52.351441   14108 main.go:141] libmachine: Creating SSH key...
	I0429 19:22:53.016072   14108 main.go:141] libmachine: Creating VM...
	I0429 19:22:53.016072   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:22:55.977376   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:22:55.977451   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:55.977508   14108 main.go:141] libmachine: Using switch "Default Switch"
	I0429 19:22:55.977508   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:22:57.860081   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:22:57.860741   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:57.860806   14108 main.go:141] libmachine: Creating VHD
	I0429 19:22:57.860881   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 19:23:01.682173   14108 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 90435D32-26F0-487A-9FE2-FF887D35579A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 19:23:01.682173   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:01.683147   14108 main.go:141] libmachine: Writing magic tar header
	I0429 19:23:01.683147   14108 main.go:141] libmachine: Writing SSH key tar header
	I0429 19:23:01.694392   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 19:23:04.924738   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:04.925558   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:04.925558   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\disk.vhd' -SizeBytes 20000MB
	I0429 19:23:07.474439   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:07.475027   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:07.475191   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-513500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 19:23:11.201271   14108 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-513500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 19:23:11.201271   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:11.202071   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-513500-m02 -DynamicMemoryEnabled $false
	I0429 19:23:13.442056   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:13.442056   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:13.442727   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-513500-m02 -Count 2
	I0429 19:23:15.631699   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:15.631864   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:15.632028   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-513500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\boot2docker.iso'
	I0429 19:23:18.227622   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:18.228662   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:18.228662   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-513500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\disk.vhd'
	I0429 19:23:20.933469   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:20.934234   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:20.934234   14108 main.go:141] libmachine: Starting VM...
	I0429 19:23:20.934398   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-513500-m02
	I0429 19:23:24.134056   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:24.134056   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:24.134056   14108 main.go:141] libmachine: Waiting for host to start...
	I0429 19:23:24.134693   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:26.403244   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:26.403244   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:26.404156   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:29.026385   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:29.026385   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:30.038616   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:32.301643   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:32.301643   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:32.301746   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:34.934496   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:34.935483   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:35.948540   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:38.130552   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:38.130552   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:38.130656   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:40.689798   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:40.690796   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:41.705793   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:43.910174   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:43.910174   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:43.911230   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:46.498788   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:46.498788   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:47.505991   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:49.726197   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:49.727107   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:49.727277   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:52.377408   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:23:52.377548   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:52.377657   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:54.533125   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:54.533125   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:54.533452   14108 machine.go:94] provisionDockerMachine start ...
	I0429 19:23:54.533514   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:56.724704   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:56.724704   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:56.725033   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:59.373495   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:23:59.373495   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:59.381479   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:23:59.393846   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:23:59.393846   14108 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:23:59.516341   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 19:23:59.516341   14108 buildroot.go:166] provisioning hostname "ha-513500-m02"
	I0429 19:23:59.516713   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:01.717814   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:01.718799   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:01.718876   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:04.352439   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:04.352439   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:04.358740   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:04.359729   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:04.359729   14108 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-513500-m02 && echo "ha-513500-m02" | sudo tee /etc/hostname
	I0429 19:24:04.518886   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-513500-m02
	
	I0429 19:24:04.518886   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:06.695571   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:06.695571   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:06.695774   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:09.302859   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:09.302859   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:09.309047   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:09.309954   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:09.309954   14108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:24:09.449306   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:24:09.449369   14108 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 19:24:09.449432   14108 buildroot.go:174] setting up certificates
	I0429 19:24:09.449432   14108 provision.go:84] configureAuth start
	I0429 19:24:09.449551   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:11.611204   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:11.611204   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:11.611204   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:14.204124   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:14.204181   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:14.204181   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:16.394053   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:16.394053   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:16.394053   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:18.996611   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:18.996611   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:18.996611   14108 provision.go:143] copyHostCerts
	I0429 19:24:18.996611   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 19:24:18.997148   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 19:24:18.997284   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 19:24:18.997636   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 19:24:18.999199   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 19:24:18.999482   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 19:24:18.999579   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 19:24:19.000059   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 19:24:19.001109   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 19:24:19.001440   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 19:24:19.001440   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 19:24:19.001833   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 19:24:19.002946   14108 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-513500-m02 san=[127.0.0.1 172.17.247.146 ha-513500-m02 localhost minikube]
	I0429 19:24:19.569474   14108 provision.go:177] copyRemoteCerts
	I0429 19:24:19.583290   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:24:19.584045   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:21.725443   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:21.725443   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:21.726425   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:24.321726   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:24.321887   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:24.322415   14108 sshutil.go:53] new ssh client: &{IP:172.17.247.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\id_rsa Username:docker}
	I0429 19:24:24.433052   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.849725s)
	I0429 19:24:24.433244   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 19:24:24.433334   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 19:24:24.487049   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 19:24:24.487587   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 19:24:24.544495   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 19:24:24.545574   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:24:24.595058   14108 provision.go:87] duration metric: took 15.1455123s to configureAuth
	I0429 19:24:24.595150   14108 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:24:24.595875   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:24:24.595976   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:26.753046   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:26.753812   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:26.753886   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:29.439516   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:29.439516   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:29.446167   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:29.446469   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:29.446469   14108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 19:24:29.579155   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 19:24:29.579258   14108 buildroot.go:70] root file system type: tmpfs
	I0429 19:24:29.579341   14108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 19:24:29.579341   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:31.770583   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:31.770583   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:31.770866   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:34.426910   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:34.426910   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:34.433550   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:34.433780   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:34.433780   14108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.240.42"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 19:24:34.593721   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.240.42
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 19:24:34.594004   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:36.774062   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:36.774154   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:36.774154   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:39.416973   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:39.417065   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:39.425073   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:39.425963   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:39.425963   14108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 19:24:41.683825   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 19:24:41.684360   14108 machine.go:97] duration metric: took 47.1505525s to provisionDockerMachine
	I0429 19:24:41.684403   14108 client.go:171] duration metric: took 1m58.9796893s to LocalClient.Create
	I0429 19:24:41.684403   14108 start.go:167] duration metric: took 1m58.9797324s to libmachine.API.Create "ha-513500"
	I0429 19:24:41.684451   14108 start.go:293] postStartSetup for "ha-513500-m02" (driver="hyperv")
	I0429 19:24:41.684492   14108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:24:41.698260   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:24:41.698260   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:43.817338   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:43.817394   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:43.817394   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:46.424449   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:46.425044   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:46.425565   14108 sshutil.go:53] new ssh client: &{IP:172.17.247.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\id_rsa Username:docker}
	I0429 19:24:46.536416   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8381192s)
	I0429 19:24:46.548421   14108 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:24:46.556456   14108 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:24:46.556584   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 19:24:46.557002   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 19:24:46.558160   14108 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 19:24:46.558160   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 19:24:46.571181   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:24:46.591542   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 19:24:46.641012   14108 start.go:296] duration metric: took 4.9565237s for postStartSetup
	I0429 19:24:46.643791   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:48.749521   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:48.749521   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:48.749521   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:51.437386   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:51.437386   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:51.437588   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:24:51.440618   14108 start.go:128] duration metric: took 2m8.7419384s to createHost
	I0429 19:24:51.440649   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:53.587469   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:53.588062   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:53.588137   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:56.212067   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:56.213101   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:56.219778   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:56.220420   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:56.220557   14108 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:24:56.344391   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418696.353753957
	
	I0429 19:24:56.344391   14108 fix.go:216] guest clock: 1714418696.353753957
	I0429 19:24:56.344391   14108 fix.go:229] Guest: 2024-04-29 19:24:56.353753957 +0000 UTC Remote: 2024-04-29 19:24:51.4406499 +0000 UTC m=+341.371390601 (delta=4.913104057s)
	I0429 19:24:56.344566   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:58.475678   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:58.475678   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:58.475678   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:01.110386   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:25:01.110932   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:01.116677   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:25:01.117227   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:25:01.117311   14108 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714418696
	I0429 19:25:01.273062   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 19:24:56 UTC 2024
	
	I0429 19:25:01.273118   14108 fix.go:236] clock set: Mon Apr 29 19:24:56 UTC 2024
	 (err=<nil>)
	I0429 19:25:01.273118   14108 start.go:83] releasing machines lock for "ha-513500-m02", held for 2m18.5753683s
	I0429 19:25:01.273335   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:25:03.462278   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:03.462402   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:03.462402   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:06.086664   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:25:06.086664   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:06.090069   14108 out.go:177] * Found network options:
	I0429 19:25:06.092703   14108 out.go:177]   - NO_PROXY=172.17.240.42
	W0429 19:25:06.095186   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:25:06.097627   14108 out.go:177]   - NO_PROXY=172.17.240.42
	W0429 19:25:06.100032   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:25:06.101397   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:25:06.103891   14108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:25:06.104054   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:25:06.120702   14108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 19:25:06.120702   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:25:08.312868   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:08.313731   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:08.313731   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:08.336798   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:08.336798   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:08.337600   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:11.025171   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:25:11.026226   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:11.026296   14108 sshutil.go:53] new ssh client: &{IP:172.17.247.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\id_rsa Username:docker}
	I0429 19:25:11.047745   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:25:11.047745   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:11.048725   14108 sshutil.go:53] new ssh client: &{IP:172.17.247.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\id_rsa Username:docker}
	I0429 19:25:11.185386   14108 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0646475s)
	I0429 19:25:11.185506   14108 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.081578s)
	W0429 19:25:11.185506   14108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:25:11.198871   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:25:11.230596   14108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:25:11.231462   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:25:11.231620   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:25:11.283701   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 19:25:11.319389   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 19:25:11.342130   14108 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 19:25:11.355659   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 19:25:11.393063   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:25:11.432731   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 19:25:11.468686   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:25:11.503040   14108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:25:11.539336   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 19:25:11.574787   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 19:25:11.610038   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 19:25:11.643730   14108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:25:11.680657   14108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:25:11.714656   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:11.935177   14108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 19:25:11.970968   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:25:11.984646   14108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 19:25:12.023433   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:25:12.064882   14108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:25:12.112596   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:25:12.153528   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:25:12.195975   14108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 19:25:12.267622   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:25:12.295194   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:25:12.348928   14108 ssh_runner.go:195] Run: which cri-dockerd
	I0429 19:25:12.371080   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 19:25:12.393539   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 19:25:12.443495   14108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 19:25:12.671827   14108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 19:25:12.875804   14108 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 19:25:12.875869   14108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 19:25:12.926479   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:13.142509   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 19:25:15.731373   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.588724s)
	I0429 19:25:15.745518   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 19:25:15.787901   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:25:15.825831   14108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 19:25:16.046597   14108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 19:25:16.283031   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:16.508407   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 19:25:16.554913   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:25:16.593957   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:16.825578   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 19:25:16.960164   14108 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 19:25:16.973239   14108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 19:25:16.983680   14108 start.go:562] Will wait 60s for crictl version
	I0429 19:25:16.996395   14108 ssh_runner.go:195] Run: which crictl
	I0429 19:25:17.017470   14108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:25:17.074114   14108 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 19:25:17.083121   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:25:17.135123   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:25:17.172515   14108 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 19:25:17.177358   14108 out.go:177]   - env NO_PROXY=172.17.240.42
	I0429 19:25:17.181475   14108 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 19:25:17.186699   14108 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 19:25:17.186824   14108 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 19:25:17.186824   14108 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 19:25:17.186824   14108 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 19:25:17.191425   14108 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 19:25:17.191487   14108 ip.go:210] interface addr: 172.17.240.1/20
	I0429 19:25:17.211794   14108 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 19:25:17.219017   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:25:17.246348   14108 mustload.go:65] Loading cluster: ha-513500
	I0429 19:25:17.246855   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:25:17.248076   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:25:19.378111   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:19.378111   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:19.378699   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:25:19.379360   14108 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500 for IP: 172.17.247.146
	I0429 19:25:19.379360   14108 certs.go:194] generating shared ca certs ...
	I0429 19:25:19.379488   14108 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:25:19.380095   14108 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 19:25:19.380470   14108 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 19:25:19.380596   14108 certs.go:256] generating profile certs ...
	I0429 19:25:19.381215   14108 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key
	I0429 19:25:19.381443   14108 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.1f34c545
	I0429 19:25:19.381548   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.1f34c545 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.240.42 172.17.247.146 172.17.255.254]
	I0429 19:25:19.755547   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.1f34c545 ...
	I0429 19:25:19.755547   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.1f34c545: {Name:mk271c88acfc6db25bfab47fbc94e7bcf34e85a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:25:19.757371   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.1f34c545 ...
	I0429 19:25:19.757371   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.1f34c545: {Name:mke9a46b3c4416c9e568a9bbc772920966068d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:25:19.757884   14108 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.1f34c545 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt
	I0429 19:25:19.769937   14108 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.1f34c545 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key
	I0429 19:25:19.770967   14108 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key
	I0429 19:25:19.770967   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:25:19.770967   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:25:19.772397   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:25:19.772711   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:25:19.772711   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:25:19.773016   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:25:19.773016   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:25:19.773555   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:25:19.774130   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 19:25:19.774161   14108 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 19:25:19.774161   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 19:25:19.774890   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 19:25:19.775531   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 19:25:19.775531   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 19:25:19.776464   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 19:25:19.776464   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:25:19.776464   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 19:25:19.776998   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 19:25:19.777434   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:25:21.914884   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:21.914884   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:21.915703   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:24.531839   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:25:24.531897   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:24.531897   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:25:24.628583   14108 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 19:25:24.636161   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 19:25:24.676521   14108 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 19:25:24.684455   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0429 19:25:24.720397   14108 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 19:25:24.728704   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 19:25:24.767379   14108 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 19:25:24.774772   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 19:25:24.811910   14108 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 19:25:24.820997   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 19:25:24.856129   14108 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 19:25:24.863017   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 19:25:24.884075   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:25:24.939258   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 19:25:24.991498   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:25:25.044226   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:25:25.097875   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 19:25:25.150479   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0429 19:25:25.211064   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:25:25.267869   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 19:25:25.322930   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:25:25.378424   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 19:25:25.430950   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 19:25:25.481934   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 19:25:25.517248   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0429 19:25:25.556841   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 19:25:25.592728   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 19:25:25.627995   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 19:25:25.662790   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 19:25:25.697158   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 19:25:25.745194   14108 ssh_runner.go:195] Run: openssl version
	I0429 19:25:25.764761   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:25:25.798355   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:25:25.807673   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:25:25.821435   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:25:25.844480   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:25:25.880565   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 19:25:25.914326   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 19:25:25.921754   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 19:25:25.935357   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 19:25:25.956936   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 19:25:25.994271   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 19:25:26.031400   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 19:25:26.040323   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 19:25:26.054078   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 19:25:26.077741   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:25:26.124699   14108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:25:26.132687   14108 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:25:26.133013   14108 kubeadm.go:928] updating node {m02 172.17.247.146 8443 v1.30.0 docker true true} ...
	I0429 19:25:26.133209   14108 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-513500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.247.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:25:26.133209   14108 kube-vip.go:115] generating kube-vip config ...
	I0429 19:25:26.145834   14108 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:25:26.175290   14108 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:25:26.175397   14108 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0429 19:25:26.189293   14108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:25:26.218094   14108 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 19:25:26.231878   14108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 19:25:26.255583   14108 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0429 19:25:26.256086   14108 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0429 19:25:26.256173   14108 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0429 19:25:27.404293   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:25:27.419561   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:25:27.428010   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 19:25:27.428097   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 19:25:28.762151   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:25:28.781361   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:25:28.789528   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 19:25:28.789758   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 19:25:30.770435   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:25:30.798246   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:25:30.811970   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:25:30.821250   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 19:25:30.821537   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0429 19:25:31.630846   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 19:25:31.654971   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0429 19:25:31.692128   14108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:25:31.734693   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:25:31.787923   14108 ssh_runner.go:195] Run: grep 172.17.255.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:25:31.794797   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:25:31.839541   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:32.071289   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:25:32.102570   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:25:32.103394   14108 start.go:316] joinCluster: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:25:32.103510   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 19:25:32.103510   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:25:34.232794   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:34.232794   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:34.233820   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:36.914959   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:25:36.915836   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:36.916315   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:25:37.137071   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0330421s)
	I0429 19:25:37.137242   14108 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:25:37.137341   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0yzlj.jap8zibl02aqm219 --discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-513500-m02 --control-plane --apiserver-advertise-address=172.17.247.146 --apiserver-bind-port=8443"
	I0429 19:26:23.608723   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0yzlj.jap8zibl02aqm219 --discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-513500-m02 --control-plane --apiserver-advertise-address=172.17.247.146 --apiserver-bind-port=8443": (46.4710201s)
	I0429 19:26:23.608723   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 19:26:24.504734   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-513500-m02 minikube.k8s.io/updated_at=2024_04_29T19_26_24_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-513500 minikube.k8s.io/primary=false
	I0429 19:26:24.698086   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-513500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 19:26:24.897813   14108 start.go:318] duration metric: took 52.7940082s to joinCluster
	I0429 19:26:24.897813   14108 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:26:24.903363   14108 out.go:177] * Verifying Kubernetes components...
	I0429 19:26:24.899034   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:26:24.920973   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:26:25.400157   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:26:25.431717   14108 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:26:25.432198   14108 kapi.go:59] client config for ha-513500: &rest.Config{Host:"https://172.17.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 19:26:25.432198   14108 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.255.254:8443 with https://172.17.240.42:8443
	I0429 19:26:25.433570   14108 node_ready.go:35] waiting up to 6m0s for node "ha-513500-m02" to be "Ready" ...
	I0429 19:26:25.433570   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:25.433570   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:25.433570   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:25.433570   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:25.450402   14108 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 19:26:25.938808   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:25.938890   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:25.938890   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:25.938890   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:25.946484   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:26.445084   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:26.445148   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:26.445148   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:26.445224   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:26.453753   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:26:26.935966   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:26.936079   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:26.936079   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:26.936135   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:26.944118   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:27.441818   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:27.441818   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:27.442180   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:27.442180   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:27.804807   14108 round_trippers.go:574] Response Status: 200 OK in 362 milliseconds
	I0429 19:26:27.805323   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:27.947326   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:27.947326   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:27.947326   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:27.947326   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:27.952626   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:28.438670   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:28.438670   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:28.438670   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:28.438670   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:28.444472   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:28.945844   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:28.945844   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:28.945844   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:28.945969   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:28.974028   14108 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0429 19:26:29.440738   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:29.441035   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:29.441035   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:29.441035   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:29.446433   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:29.934012   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:29.934012   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:29.934012   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:29.934012   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:29.940479   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:29.941450   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:30.445672   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:30.468037   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:30.468037   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:30.468037   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:30.473265   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:30.935997   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:30.936108   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:30.936108   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:30.936108   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:30.940845   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:31.441278   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:31.441365   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:31.441365   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:31.441365   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:31.446106   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:31.947541   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:31.947541   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:31.947541   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:31.947541   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:31.953177   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:31.954702   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:32.434284   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:32.434630   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:32.434630   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:32.434703   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:32.440350   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:32.948445   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:32.948445   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:32.948445   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:32.948445   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:32.954059   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:33.435971   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:33.435971   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:33.435971   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:33.435971   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:33.440574   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:33.945943   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:33.945943   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:33.945943   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:33.945943   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:33.952625   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:34.434132   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:34.434132   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:34.434132   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:34.434132   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:34.443981   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:26:34.444993   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:34.935559   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:34.935882   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:34.935882   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:34.935882   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:34.941936   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:35.436593   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:35.436593   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:35.436593   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:35.436593   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:35.442480   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:35.936507   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:35.936507   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:35.936683   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:35.936683   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:35.941247   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:36.439202   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:36.439202   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:36.439202   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:36.439202   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:36.446067   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:36.446977   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:36.939470   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:36.939470   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:36.939470   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:36.939470   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:36.944469   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.446898   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:37.446898   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.446898   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.446898   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.452445   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:37.453450   14108 node_ready.go:49] node "ha-513500-m02" has status "Ready":"True"
	I0429 19:26:37.453450   14108 node_ready.go:38] duration metric: took 12.0197862s for node "ha-513500-m02" to be "Ready" ...
	I0429 19:26:37.453450   14108 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:26:37.454056   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:37.454056   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.454056   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.454056   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.461346   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:37.472695   14108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.472695   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5jxcm
	I0429 19:26:37.472695   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.473243   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.473243   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.477024   14108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:26:37.478550   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.478626   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.478626   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.478626   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.483051   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.484036   14108 pod_ready.go:92] pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.484159   14108 pod_ready.go:81] duration metric: took 11.4647ms for pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.484159   14108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.484300   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n22jn
	I0429 19:26:37.484300   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.484300   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.484300   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.489039   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.489825   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.489873   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.489873   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.489873   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.493874   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.495021   14108 pod_ready.go:92] pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.495021   14108 pod_ready.go:81] duration metric: took 10.8612ms for pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.495021   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.495274   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500
	I0429 19:26:37.495274   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.495274   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.495274   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.500250   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.501214   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.501214   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.501214   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.501214   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.506208   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.507408   14108 pod_ready.go:92] pod "etcd-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.507408   14108 pod_ready.go:81] duration metric: took 12.3868ms for pod "etcd-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.507408   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.507408   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m02
	I0429 19:26:37.507408   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.507408   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.507408   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.513055   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:37.514257   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:37.514257   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.514828   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.514828   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.518524   14108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:26:37.518524   14108 pod_ready.go:92] pod "etcd-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.518524   14108 pod_ready.go:81] duration metric: took 11.1166ms for pod "etcd-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.518524   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.651990   14108 request.go:629] Waited for 132.4625ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500
	I0429 19:26:37.651990   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500
	I0429 19:26:37.651990   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.651990   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.651990   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.657091   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:37.853774   14108 request.go:629] Waited for 195.5556ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.853906   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.853906   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.853906   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.853906   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.863490   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:26:37.864990   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.864990   14108 pod_ready.go:81] duration metric: took 346.4633ms for pod "kube-apiserver-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.865059   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:38.057285   14108 request.go:629] Waited for 191.959ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:26:38.057678   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:26:38.057678   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.057678   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.057678   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.063079   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:38.261121   14108 request.go:629] Waited for 197.0278ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:38.261304   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:38.261418   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.261418   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.261418   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.268989   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:38.449708   14108 request.go:629] Waited for 77.4875ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:26:38.449897   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:26:38.449897   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.449897   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.449897   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.460288   14108 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 19:26:38.655184   14108 request.go:629] Waited for 193.5814ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:38.655184   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:38.655414   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.655414   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.655414   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.660756   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:38.662036   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:38.662036   14108 pod_ready.go:81] duration metric: took 796.9708ms for pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:38.662036   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:38.858545   14108 request.go:629] Waited for 196.3947ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500
	I0429 19:26:38.858788   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500
	I0429 19:26:38.858788   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.858788   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.858908   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.865933   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:39.061577   14108 request.go:629] Waited for 194.158ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:39.061852   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:39.061852   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.061852   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.061957   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.066570   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:39.068303   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:39.068303   14108 pod_ready.go:81] duration metric: took 406.2641ms for pod "kube-controller-manager-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.068303   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.249180   14108 request.go:629] Waited for 180.8756ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m02
	I0429 19:26:39.249180   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m02
	I0429 19:26:39.249180   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.249180   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.249180   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.255859   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:39.450776   14108 request.go:629] Waited for 193.3527ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:39.451345   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:39.451345   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.451406   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.451406   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.456069   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:39.457572   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:39.457572   14108 pod_ready.go:81] duration metric: took 389.2653ms for pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.457572   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4l6c" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.657274   14108 request.go:629] Waited for 199.5567ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4l6c
	I0429 19:26:39.657517   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4l6c
	I0429 19:26:39.657517   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.657614   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.657614   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.664334   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:39.858096   14108 request.go:629] Waited for 192.9683ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:39.858380   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:39.858380   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.858380   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.858380   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.863981   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:39.865649   14108 pod_ready.go:92] pod "kube-proxy-k4l6c" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:39.865705   14108 pod_ready.go:81] duration metric: took 408.1299ms for pod "kube-proxy-k4l6c" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.865705   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tm7tv" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.061213   14108 request.go:629] Waited for 195.3467ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tm7tv
	I0429 19:26:40.061213   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tm7tv
	I0429 19:26:40.061213   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.061213   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.061213   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.067306   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:40.249480   14108 request.go:629] Waited for 180.7548ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:40.249480   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:40.249797   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.249797   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.249797   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.259251   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:26:40.260923   14108 pod_ready.go:92] pod "kube-proxy-tm7tv" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:40.260923   14108 pod_ready.go:81] duration metric: took 395.2154ms for pod "kube-proxy-tm7tv" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.261005   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.452449   14108 request.go:629] Waited for 191.154ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500
	I0429 19:26:40.452640   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500
	I0429 19:26:40.452640   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.452640   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.452640   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.464050   14108 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 19:26:40.658842   14108 request.go:629] Waited for 193.3797ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:40.658842   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:40.658842   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.658842   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.658842   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.664927   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:40.665543   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:40.665674   14108 pod_ready.go:81] duration metric: took 404.6656ms for pod "kube-scheduler-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.665674   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.849847   14108 request.go:629] Waited for 183.9296ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m02
	I0429 19:26:40.850120   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m02
	I0429 19:26:40.850120   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.850120   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.850179   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.855184   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:41.053307   14108 request.go:629] Waited for 195.9752ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:41.053648   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:41.053648   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.053712   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.053712   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.059143   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:41.060226   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:41.060226   14108 pod_ready.go:81] duration metric: took 394.5489ms for pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:41.060226   14108 pod_ready.go:38] duration metric: took 3.6067471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:26:41.060376   14108 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:26:41.074029   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:26:41.111099   14108 api_server.go:72] duration metric: took 16.2131597s to wait for apiserver process to appear ...
	I0429 19:26:41.111099   14108 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:26:41.111099   14108 api_server.go:253] Checking apiserver healthz at https://172.17.240.42:8443/healthz ...
	I0429 19:26:41.119115   14108 api_server.go:279] https://172.17.240.42:8443/healthz returned 200:
	ok
	I0429 19:26:41.119197   14108 round_trippers.go:463] GET https://172.17.240.42:8443/version
	I0429 19:26:41.119352   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.119352   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.119352   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.121164   14108 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 19:26:41.121536   14108 api_server.go:141] control plane version: v1.30.0
	I0429 19:26:41.121660   14108 api_server.go:131] duration metric: took 10.5607ms to wait for apiserver health ...
	I0429 19:26:41.121660   14108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:26:41.257729   14108 request.go:629] Waited for 135.9879ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:41.257729   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:41.257729   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.257729   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.257729   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.267602   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:26:41.273611   14108 system_pods.go:59] 17 kube-system pods found
	I0429 19:26:41.273611   14108 system_pods.go:61] "coredns-7db6d8ff4d-5jxcm" [37ba2046-4273-4570-87af-2cc6d03ca54a] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "coredns-7db6d8ff4d-n22jn" [053e60b3-41d0-4923-9655-02d7dacd691f] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "etcd-ha-513500" [63f6504e-f824-4c6d-afb9-92ed2f0457cd] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "etcd-ha-513500-m02" [2d63d157-843e-4750-b4b0-cfa577e7c8a1] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "kindnet-9w6qr" [eb7641e9-6df3-4b9f-b78c-e251de8ebf78] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "kindnet-kdpql" [da068cd7-8925-45ed-a5a4-ff2db9d08bd8] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-apiserver-ha-513500" [e7a880e7-5218-4bde-9d62-532836751bbe] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-apiserver-ha-513500-m02" [52c1e20c-27a1-47d2-8405-4537727dac35] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-controller-manager-ha-513500" [bcf915a3-542c-422a-815b-823254b624ff] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-controller-manager-ha-513500-m02" [bc495cfd-bf88-4ef8-b33c-d252f4d9a717] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-proxy-k4l6c" [2c1fff7e-2f97-497a-b6b6-0fcb6e2fcea6] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-proxy-tm7tv" [b4ba7f26-253c-4c1c-83f4-7251a2ad14d4] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-scheduler-ha-513500" [76e5a3e9-d895-406a-ad12-cbaa48b4c52d] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-scheduler-ha-513500-m02" [643c27a0-ca4d-499d-abd7-99aa504580cb] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-vip-ha-513500" [bf461c57-113c-4b7b-987e-04dcc8c13373] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-vip-ha-513500-m02" [76f42a60-c769-42fe-ab90-963fe0ec3489] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "storage-provisioner" [6a5df654-f7da-40f4-a05f-acf47aa779a1] Running
	I0429 19:26:41.274630   14108 system_pods.go:74] duration metric: took 152.8889ms to wait for pod list to return data ...
	I0429 19:26:41.274630   14108 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:26:41.448475   14108 request.go:629] Waited for 173.8436ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:26:41.448475   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:26:41.448475   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.448475   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.448475   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.456499   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:26:41.456499   14108 default_sa.go:45] found service account: "default"
	I0429 19:26:41.456499   14108 default_sa.go:55] duration metric: took 181.8671ms for default service account to be created ...
	I0429 19:26:41.456499   14108 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:26:41.651815   14108 request.go:629] Waited for 195.3146ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:41.652470   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:41.652470   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.652519   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.652519   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.660525   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:26:41.667582   14108 system_pods.go:86] 17 kube-system pods found
	I0429 19:26:41.667582   14108 system_pods.go:89] "coredns-7db6d8ff4d-5jxcm" [37ba2046-4273-4570-87af-2cc6d03ca54a] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "coredns-7db6d8ff4d-n22jn" [053e60b3-41d0-4923-9655-02d7dacd691f] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "etcd-ha-513500" [63f6504e-f824-4c6d-afb9-92ed2f0457cd] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "etcd-ha-513500-m02" [2d63d157-843e-4750-b4b0-cfa577e7c8a1] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kindnet-9w6qr" [eb7641e9-6df3-4b9f-b78c-e251de8ebf78] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kindnet-kdpql" [da068cd7-8925-45ed-a5a4-ff2db9d08bd8] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-apiserver-ha-513500" [e7a880e7-5218-4bde-9d62-532836751bbe] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-apiserver-ha-513500-m02" [52c1e20c-27a1-47d2-8405-4537727dac35] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-controller-manager-ha-513500" [bcf915a3-542c-422a-815b-823254b624ff] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-controller-manager-ha-513500-m02" [bc495cfd-bf88-4ef8-b33c-d252f4d9a717] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-proxy-k4l6c" [2c1fff7e-2f97-497a-b6b6-0fcb6e2fcea6] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-proxy-tm7tv" [b4ba7f26-253c-4c1c-83f4-7251a2ad14d4] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-scheduler-ha-513500" [76e5a3e9-d895-406a-ad12-cbaa48b4c52d] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-scheduler-ha-513500-m02" [643c27a0-ca4d-499d-abd7-99aa504580cb] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-vip-ha-513500" [bf461c57-113c-4b7b-987e-04dcc8c13373] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-vip-ha-513500-m02" [76f42a60-c769-42fe-ab90-963fe0ec3489] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "storage-provisioner" [6a5df654-f7da-40f4-a05f-acf47aa779a1] Running
	I0429 19:26:41.667582   14108 system_pods.go:126] duration metric: took 211.0813ms to wait for k8s-apps to be running ...
	I0429 19:26:41.667582   14108 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:26:41.681069   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:26:41.719796   14108 system_svc.go:56] duration metric: took 52.2138ms WaitForService to wait for kubelet
	I0429 19:26:41.719849   14108 kubeadm.go:576] duration metric: took 16.8219042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:26:41.719896   14108 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:26:41.855108   14108 request.go:629] Waited for 135.1454ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes
	I0429 19:26:41.855297   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes
	I0429 19:26:41.855297   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.855297   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.855297   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.864007   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:26:41.864580   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:26:41.864580   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:26:41.864580   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:26:41.864580   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:26:41.864580   14108 node_conditions.go:105] duration metric: took 144.6836ms to run NodePressure ...
	I0429 19:26:41.864580   14108 start.go:240] waiting for startup goroutines ...
	I0429 19:26:41.865109   14108 start.go:254] writing updated cluster config ...
	I0429 19:26:41.870020   14108 out.go:177] 
	I0429 19:26:41.883723   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:26:41.883723   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:26:41.890659   14108 out.go:177] * Starting "ha-513500-m03" control-plane node in "ha-513500" cluster
	I0429 19:26:41.893914   14108 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:26:41.893914   14108 cache.go:56] Caching tarball of preloaded images
	I0429 19:26:41.894652   14108 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 19:26:41.894652   14108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 19:26:41.894652   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:26:41.899332   14108 start.go:360] acquireMachinesLock for ha-513500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:26:41.899332   14108 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-513500-m03"
	I0429 19:26:41.899332   14108 start.go:93] Provisioning new machine with config: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:26:41.900343   14108 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0429 19:26:41.903345   14108 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:26:41.904542   14108 start.go:159] libmachine.API.Create for "ha-513500" (driver="hyperv")
	I0429 19:26:41.904602   14108 client.go:168] LocalClient.Create starting
	I0429 19:26:41.904884   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 19:26:41.904884   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:26:41.904884   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:26:41.905478   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 19:26:41.905478   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:26:41.905478   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:26:41.905478   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 19:26:43.905198   14108 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 19:26:43.905275   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:43.905275   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 19:26:45.718186   14108 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 19:26:45.718186   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:45.718186   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:26:47.302616   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:26:47.302616   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:47.302616   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:26:51.058527   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:26:51.058527   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:51.061089   14108 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:26:51.579004   14108 main.go:141] libmachine: Creating SSH key...
	I0429 19:26:51.756997   14108 main.go:141] libmachine: Creating VM...
	I0429 19:26:51.756997   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:26:54.790733   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:26:54.790733   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:54.791618   14108 main.go:141] libmachine: Using switch "Default Switch"
	I0429 19:26:54.791618   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:26:56.642998   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:26:56.643325   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:56.643325   14108 main.go:141] libmachine: Creating VHD
	I0429 19:26:56.643325   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 19:27:00.410847   14108 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 65B5D04D-688D-4E5B-904B-7E141F51FF8F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 19:27:00.410847   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:00.410847   14108 main.go:141] libmachine: Writing magic tar header
	I0429 19:27:00.410847   14108 main.go:141] libmachine: Writing SSH key tar header
	I0429 19:27:00.421665   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 19:27:03.631607   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:03.631607   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:03.631884   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\disk.vhd' -SizeBytes 20000MB
	I0429 19:27:06.201398   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:06.202341   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:06.202457   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-513500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 19:27:09.990901   14108 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-513500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 19:27:09.990998   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:09.990998   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-513500-m03 -DynamicMemoryEnabled $false
	I0429 19:27:12.223991   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:12.224444   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:12.224444   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-513500-m03 -Count 2
	I0429 19:27:14.412192   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:14.412192   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:14.412192   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-513500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\boot2docker.iso'
	I0429 19:27:17.051395   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:17.051395   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:17.051395   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-513500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\disk.vhd'
	I0429 19:27:19.772572   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:19.772572   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:19.772572   14108 main.go:141] libmachine: Starting VM...
	I0429 19:27:19.773468   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-513500-m03
	I0429 19:27:22.989368   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:22.989368   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:22.989368   14108 main.go:141] libmachine: Waiting for host to start...
	I0429 19:27:22.989368   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:25.360543   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:25.360543   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:25.360543   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:27.937259   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:27.937585   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:28.944386   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:31.209567   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:31.209567   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:31.209567   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:33.868379   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:33.868379   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:34.876137   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:37.088703   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:37.088915   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:37.088915   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:39.681856   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:39.682577   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:40.689299   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:42.895504   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:42.895504   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:42.896539   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:45.447085   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:45.447623   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:46.451433   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:48.687031   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:48.688025   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:48.688077   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:51.375019   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:27:51.375019   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:51.375665   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:53.574792   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:53.574792   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:53.574792   14108 machine.go:94] provisionDockerMachine start ...
	I0429 19:27:53.574792   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:55.756313   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:55.756385   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:55.756385   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:58.393798   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:27:58.393798   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:58.404436   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:27:58.416373   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:27:58.416621   14108 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:27:58.553426   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 19:27:58.553426   14108 buildroot.go:166] provisioning hostname "ha-513500-m03"
	I0429 19:27:58.553594   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:00.720275   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:00.720275   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:00.720555   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:03.382865   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:03.382923   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:03.389210   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:03.389959   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:03.389959   14108 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-513500-m03 && echo "ha-513500-m03" | sudo tee /etc/hostname
	I0429 19:28:03.560439   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-513500-m03
	
	I0429 19:28:03.560439   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:05.733256   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:05.733786   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:05.733786   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:08.407301   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:08.407611   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:08.414362   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:08.414504   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:08.414504   14108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:28:08.571230   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:28:08.571230   14108 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 19:28:08.571230   14108 buildroot.go:174] setting up certificates
	I0429 19:28:08.571230   14108 provision.go:84] configureAuth start
	I0429 19:28:08.571230   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:10.741881   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:10.741881   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:10.741881   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:13.368222   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:13.368222   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:13.368789   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:15.515159   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:15.515407   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:15.515407   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:18.146798   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:18.147733   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:18.147808   14108 provision.go:143] copyHostCerts
	I0429 19:28:18.147918   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 19:28:18.148374   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 19:28:18.148374   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 19:28:18.148855   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 19:28:18.150108   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 19:28:18.150389   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 19:28:18.150389   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 19:28:18.150722   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 19:28:18.151800   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 19:28:18.152041   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 19:28:18.152041   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 19:28:18.152438   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 19:28:18.153595   14108 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-513500-m03 san=[127.0.0.1 172.17.246.101 ha-513500-m03 localhost minikube]
	I0429 19:28:18.526406   14108 provision.go:177] copyRemoteCerts
	I0429 19:28:18.539553   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:28:18.539553   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:20.713666   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:20.714647   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:20.714759   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:23.345679   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:23.345679   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:23.346755   14108 sshutil.go:53] new ssh client: &{IP:172.17.246.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\id_rsa Username:docker}
	I0429 19:28:23.458156   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9185641s)
	I0429 19:28:23.458325   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 19:28:23.458791   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 19:28:23.510697   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 19:28:23.510697   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 19:28:23.561556   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 19:28:23.561556   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:28:23.610643   14108 provision.go:87] duration metric: took 15.039294s to configureAuth
	I0429 19:28:23.610643   14108 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:28:23.611636   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:28:23.611636   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:25.734124   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:25.734885   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:25.734885   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:28.356029   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:28.356029   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:28.362740   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:28.363240   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:28.363240   14108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 19:28:28.495663   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 19:28:28.495735   14108 buildroot.go:70] root file system type: tmpfs
	I0429 19:28:28.495948   14108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 19:28:28.495948   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:30.648272   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:30.649140   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:30.649140   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:33.261658   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:33.262017   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:33.270622   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:33.271358   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:33.271358   14108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.240.42"
	Environment="NO_PROXY=172.17.240.42,172.17.247.146"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 19:28:33.441690   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.240.42
	Environment=NO_PROXY=172.17.240.42,172.17.247.146
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 19:28:33.441942   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:35.626105   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:35.626105   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:35.626105   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:38.254611   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:38.255222   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:38.261588   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:38.262134   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:38.262390   14108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 19:28:40.501151   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 19:28:40.501228   14108 machine.go:97] duration metric: took 46.9260654s to provisionDockerMachine
	I0429 19:28:40.501228   14108 client.go:171] duration metric: took 1m58.5956941s to LocalClient.Create
	I0429 19:28:40.501228   14108 start.go:167] duration metric: took 1m58.5959633s to libmachine.API.Create "ha-513500"
	I0429 19:28:40.501228   14108 start.go:293] postStartSetup for "ha-513500-m03" (driver="hyperv")
	I0429 19:28:40.501228   14108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:28:40.515626   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:28:40.515626   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:42.655830   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:42.655830   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:42.655830   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:45.238660   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:45.239622   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:45.240190   14108 sshutil.go:53] new ssh client: &{IP:172.17.246.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\id_rsa Username:docker}
	I0429 19:28:45.350234   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8345198s)
	I0429 19:28:45.364856   14108 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:28:45.372269   14108 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:28:45.372269   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 19:28:45.372907   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 19:28:45.373879   14108 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 19:28:45.373879   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 19:28:45.387882   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:28:45.409649   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 19:28:45.468482   14108 start.go:296] duration metric: took 4.966989s for postStartSetup
	I0429 19:28:45.472723   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:47.629572   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:47.629572   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:47.629821   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:50.300143   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:50.300143   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:50.300438   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:28:50.302686   14108 start.go:128] duration metric: took 2m8.4013339s to createHost
	I0429 19:28:50.302980   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:52.492330   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:52.492330   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:52.492865   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:55.101854   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:55.101854   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:55.111485   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:55.112230   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:55.112230   14108 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:28:55.239417   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418935.253259815
	
	I0429 19:28:55.239417   14108 fix.go:216] guest clock: 1714418935.253259815
	I0429 19:28:55.239417   14108 fix.go:229] Guest: 2024-04-29 19:28:55.253259815 +0000 UTC Remote: 2024-04-29 19:28:50.3029808 +0000 UTC m=+580.231871601 (delta=4.950279015s)
	I0429 19:28:55.239574   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:57.414202   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:57.415076   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:57.415139   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:00.053729   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:29:00.053954   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:00.060609   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:29:00.060799   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:29:00.060799   14108 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714418935
	I0429 19:29:00.216429   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 19:28:55 UTC 2024
	
	I0429 19:29:00.216502   14108 fix.go:236] clock set: Mon Apr 29 19:28:55 UTC 2024
	 (err=<nil>)
	I0429 19:29:00.216502   14108 start.go:83] releasing machines lock for "ha-513500-m03", held for 2m18.316083s
	I0429 19:29:00.216864   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:29:02.387914   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:02.388665   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:02.388665   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:05.025197   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:29:05.025197   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:05.030939   14108 out.go:177] * Found network options:
	I0429 19:29:05.033920   14108 out.go:177]   - NO_PROXY=172.17.240.42,172.17.247.146
	W0429 19:29:05.035981   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:29:05.035981   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:29:05.039114   14108 out.go:177]   - NO_PROXY=172.17.240.42,172.17.247.146
	W0429 19:29:05.043378   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:29:05.043378   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:29:05.044574   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:29:05.044574   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:29:05.047762   14108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:29:05.047762   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:29:05.060966   14108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 19:29:05.061156   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:29:07.267963   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:07.267963   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:07.267963   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:07.271668   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:07.271668   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:07.271668   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:10.035824   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:29:10.035824   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:10.035936   14108 sshutil.go:53] new ssh client: &{IP:172.17.246.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\id_rsa Username:docker}
	I0429 19:29:10.064588   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:29:10.064588   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:10.065386   14108 sshutil.go:53] new ssh client: &{IP:172.17.246.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\id_rsa Username:docker}
	I0429 19:29:10.141652   14108 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0805372s)
	W0429 19:29:10.141770   14108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:29:10.155424   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:29:10.344285   14108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:29:10.344428   14108 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2966249s)
	I0429 19:29:10.344428   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:29:10.344664   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:29:10.405048   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 19:29:10.444352   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 19:29:10.484000   14108 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 19:29:10.498919   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 19:29:10.543700   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:29:10.583833   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 19:29:10.626125   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:29:10.663682   14108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:29:10.701840   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 19:29:10.738655   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 19:29:10.777749   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 19:29:10.815018   14108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:29:10.852594   14108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:29:10.904663   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:11.137529   14108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 19:29:11.175286   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:29:11.188862   14108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 19:29:11.232328   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:29:11.273413   14108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:29:11.325915   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:29:11.368347   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:29:11.412321   14108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 19:29:11.475213   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:29:11.509289   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:29:11.562719   14108 ssh_runner.go:195] Run: which cri-dockerd
	I0429 19:29:11.582998   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 19:29:11.607043   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 19:29:11.654995   14108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 19:29:11.878372   14108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 19:29:12.095804   14108 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 19:29:12.095943   14108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 19:29:12.148665   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:12.369350   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 19:29:14.935311   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5654558s)
	I0429 19:29:14.949001   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 19:29:14.990118   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:29:15.035287   14108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 19:29:15.265160   14108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 19:29:15.522681   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:15.777420   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 19:29:15.831031   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:29:15.875382   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:16.108088   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 19:29:16.230861   14108 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 19:29:16.243985   14108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 19:29:16.253737   14108 start.go:562] Will wait 60s for crictl version
	I0429 19:29:16.271546   14108 ssh_runner.go:195] Run: which crictl
	I0429 19:29:16.297067   14108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:29:16.358824   14108 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 19:29:16.370221   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:29:16.417463   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:29:16.454510   14108 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 19:29:16.457513   14108 out.go:177]   - env NO_PROXY=172.17.240.42
	I0429 19:29:16.460521   14108 out.go:177]   - env NO_PROXY=172.17.240.42,172.17.247.146
	I0429 19:29:16.462522   14108 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 19:29:16.466524   14108 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 19:29:16.466524   14108 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 19:29:16.466524   14108 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 19:29:16.466524   14108 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 19:29:16.469508   14108 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 19:29:16.469508   14108 ip.go:210] interface addr: 172.17.240.1/20
	I0429 19:29:16.481511   14108 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 19:29:16.488516   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:29:16.510549   14108 mustload.go:65] Loading cluster: ha-513500
	I0429 19:29:16.512250   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:29:16.512810   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:29:18.676610   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:18.676894   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:18.677008   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:29:18.677862   14108 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500 for IP: 172.17.246.101
	I0429 19:29:18.677862   14108 certs.go:194] generating shared ca certs ...
	I0429 19:29:18.677862   14108 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:29:18.678402   14108 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 19:29:18.678804   14108 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 19:29:18.678860   14108 certs.go:256] generating profile certs ...
	I0429 19:29:18.679506   14108 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key
	I0429 19:29:18.679677   14108 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.3dac6f02
	I0429 19:29:18.679677   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.3dac6f02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.240.42 172.17.247.146 172.17.246.101 172.17.255.254]
	I0429 19:29:19.188832   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.3dac6f02 ...
	I0429 19:29:19.188832   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.3dac6f02: {Name:mka4aa4e7b09d84005f0f01ff2299a91be08baaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:29:19.190990   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.3dac6f02 ...
	I0429 19:29:19.190990   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.3dac6f02: {Name:mk1974955d4f3ba88d7af5fedd95e2cb2387b0f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:29:19.190990   14108 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.3dac6f02 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt
	I0429 19:29:19.203706   14108 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.3dac6f02 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key
	I0429 19:29:19.204038   14108 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key
	I0429 19:29:19.204038   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:29:19.205109   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:29:19.205160   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:29:19.205456   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:29:19.205624   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:29:19.205804   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:29:19.206020   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:29:19.206509   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:29:19.207117   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 19:29:19.207468   14108 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 19:29:19.207726   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 19:29:19.208007   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 19:29:19.208279   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 19:29:19.208474   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 19:29:19.209007   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 19:29:19.209471   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:29:19.209640   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 19:29:19.210065   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 19:29:19.210270   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:29:21.402532   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:21.402775   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:21.402869   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:24.010577   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:29:24.010577   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:24.010577   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:29:24.114529   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0429 19:29:24.126107   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 19:29:24.165966   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0429 19:29:24.174800   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0429 19:29:24.211164   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 19:29:24.219415   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 19:29:24.257944   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0429 19:29:24.265603   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 19:29:24.298966   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0429 19:29:24.307110   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 19:29:24.340456   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0429 19:29:24.349190   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 19:29:24.374920   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:29:24.430360   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 19:29:24.481626   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:29:24.534280   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:29:24.588062   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0429 19:29:24.643850   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 19:29:24.693852   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:29:24.742831   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 19:29:24.791835   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:29:24.840237   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 19:29:24.889792   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 19:29:24.944499   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 19:29:24.980398   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0429 19:29:25.015400   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 19:29:25.049654   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 19:29:25.079788   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 19:29:25.129138   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 19:29:25.170628   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 19:29:25.221738   14108 ssh_runner.go:195] Run: openssl version
	I0429 19:29:25.243675   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 19:29:25.280878   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 19:29:25.288719   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 19:29:25.301984   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 19:29:25.325715   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:29:25.361498   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:29:25.399777   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:29:25.410381   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:29:25.423889   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:29:25.450461   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:29:25.486853   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 19:29:25.523544   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 19:29:25.531455   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 19:29:25.545658   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 19:29:25.569870   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
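	The `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's hashed-symlink lookup convention for CA directories: each certificate gets a `<subject-hash>.0` symlink in `/etc/ssl/certs`. A minimal, self-contained sketch of the same pattern (the throwaway certificate and temp directory are illustrative, not taken from this run):

```shell
set -e
tmp=$(mktemp -d)
# generate a throwaway self-signed certificate to stand in for the CA file
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -days 1 2>/dev/null
# compute the subject hash and create the <hash>.0 symlink OpenSSL looks up
hash=$(openssl x509 -hash -noout -in "$tmp/cert.pem")
ln -fs "$tmp/cert.pem" "$tmp/$hash.0"
```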
	I0429 19:29:25.606592   14108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:29:25.613588   14108 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:29:25.614123   14108 kubeadm.go:928] updating node {m03 172.17.246.101 8443 v1.30.0 docker true true} ...
	I0429 19:29:25.614394   14108 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-513500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.246.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:29:25.614464   14108 kube-vip.go:115] generating kube-vip config ...
	I0429 19:29:25.627663   14108 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:29:25.657517   14108 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:29:25.657602   14108 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0429 19:29:25.671285   14108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:29:25.689428   14108 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 19:29:25.702607   14108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 19:29:25.722166   14108 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 19:29:25.722780   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:29:25.722780   14108 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 19:29:25.722166   14108 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 19:29:25.722904   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
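	The `checksum=file:...kubeadm.sha256` URLs above indicate each binary is verified against a detached SHA-256 digest published alongside it. A self-contained sketch of that verification step (file names and contents are illustrative; no network access involved):

```shell
set -e
tmp=$(mktemp -d)
printf 'example binary payload\n' > "$tmp/kubeadm"
# a detached .sha256 file, as published alongside each release binary
sha256sum "$tmp/kubeadm" | awk '{print $1}' > "$tmp/kubeadm.sha256"
# verify: recompute the digest and compare against the expected value
want=$(cat "$tmp/kubeadm.sha256")
got=$(sha256sum "$tmp/kubeadm" | awk '{print $1}')
[ "$want" = "$got" ]
```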
	I0429 19:29:25.738055   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:29:25.739049   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:29:25.741358   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:29:25.745571   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 19:29:25.745571   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 19:29:25.748317   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 19:29:25.748317   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 19:29:25.787618   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:29:25.801527   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:29:25.937194   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 19:29:25.937255   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0429 19:29:27.082928   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 19:29:27.107187   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0429 19:29:27.147218   14108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:29:27.183592   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:29:27.239763   14108 ssh_runner.go:195] Run: grep 172.17.255.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:29:27.247517   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
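	The one-liner above rewrites `/etc/hosts` by filtering out any stale `control-plane.minikube.internal` line, appending the current VIP mapping, and copying the result back in one step. The same filter-then-append pattern against a temp file (the IP addresses here are illustrative):

```shell
set -e
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.17.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"
# drop the stale entry for the name, then append the current mapping
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '172.17.255.254\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```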
	I0429 19:29:27.289581   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:27.534325   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:29:27.570536   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:29:27.571366   14108 start.go:316] joinCluster: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.246.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:29:27.571699   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 19:29:27.571755   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:29:29.752091   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:29.752871   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:29.752871   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:32.436091   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:29:32.436091   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:32.436905   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:29:32.656075   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0843358s)
	I0429 19:29:32.656075   14108 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.246.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:29:32.656075   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5yaa43.ds6bmbjti0klyjf6 --discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-513500-m03 --control-plane --apiserver-advertise-address=172.17.246.101 --apiserver-bind-port=8443"
	I0429 19:30:18.713293   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5yaa43.ds6bmbjti0klyjf6 --discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-513500-m03 --control-plane --apiserver-advertise-address=172.17.246.101 --apiserver-bind-port=8443": (46.0568563s)
	I0429 19:30:18.713293   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 19:30:19.956534   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.2432312s)
	I0429 19:30:19.977503   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-513500-m03 minikube.k8s.io/updated_at=2024_04_29T19_30_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-513500 minikube.k8s.io/primary=false
	I0429 19:30:20.182454   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-513500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 19:30:20.356810   14108 start.go:318] duration metric: took 52.7849751s to joinCluster
	I0429 19:30:20.356990   14108 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.17.246.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:30:20.357787   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:30:20.360204   14108 out.go:177] * Verifying Kubernetes components...
	I0429 19:30:20.376812   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:30:20.854955   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:30:20.888703   14108 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:30:20.889465   14108 kapi.go:59] client config for ha-513500: &rest.Config{Host:"https://172.17.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 19:30:20.889637   14108 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.255.254:8443 with https://172.17.240.42:8443
	I0429 19:30:20.890559   14108 node_ready.go:35] waiting up to 6m0s for node "ha-513500-m03" to be "Ready" ...
	I0429 19:30:20.890696   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:20.890780   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:20.890780   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:20.890780   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:20.907725   14108 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 19:30:21.398190   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:21.398190   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:21.398190   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:21.398190   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:21.404190   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:21.905372   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:21.905629   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:21.905629   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:21.905629   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:21.909907   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:22.396254   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:22.396254   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:22.396254   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:22.396492   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:22.401542   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:22.891138   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:22.891138   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:22.891138   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:22.891138   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:22.895653   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:22.895653   14108 node_ready.go:53] node "ha-513500-m03" has status "Ready":"False"
	I0429 19:30:23.398162   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:23.398281   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:23.398281   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:23.398281   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:23.408428   14108 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 19:30:23.905530   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:23.905767   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:23.905767   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:23.905767   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:23.933106   14108 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0429 19:30:24.395628   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:24.395628   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:24.395628   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:24.395628   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:24.402615   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:24.897788   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:24.897788   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:24.897861   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:24.897861   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:24.902456   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:24.904110   14108 node_ready.go:53] node "ha-513500-m03" has status "Ready":"False"
	I0429 19:30:25.402078   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:25.402449   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:25.402449   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:25.402449   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:25.407728   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:25.906207   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:25.906311   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:25.906311   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:25.906311   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:25.911706   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:26.391822   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:26.392131   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.392131   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.392131   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.397215   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:26.399353   14108 node_ready.go:49] node "ha-513500-m03" has status "Ready":"True"
	I0429 19:30:26.399353   14108 node_ready.go:38] duration metric: took 5.5087511s for node "ha-513500-m03" to be "Ready" ...
	I0429 19:30:26.399445   14108 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:30:26.399542   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:26.399615   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.399615   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.399615   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.413864   14108 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 19:30:26.429152   14108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.429152   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5jxcm
	I0429 19:30:26.429152   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.429152   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.429152   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.433833   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:26.434695   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:26.434695   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.434809   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.434809   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.443755   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:30:26.444917   14108 pod_ready.go:92] pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:26.444917   14108 pod_ready.go:81] duration metric: took 15.7653ms for pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.445066   14108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.445133   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n22jn
	I0429 19:30:26.445133   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.445133   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.445133   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.451205   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:26.451205   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:26.452203   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.452322   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.452322   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.457633   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:26.458361   14108 pod_ready.go:92] pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:26.458483   14108 pod_ready.go:81] duration metric: took 13.4167ms for pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.458527   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.458626   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500
	I0429 19:30:26.458626   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.458626   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.458626   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.462868   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:26.463913   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:26.463913   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.463913   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.463913   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.467879   14108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:30:26.469240   14108 pod_ready.go:92] pod "etcd-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:26.469240   14108 pod_ready.go:81] duration metric: took 10.7127ms for pod "etcd-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.469240   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.469240   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m02
	I0429 19:30:26.469240   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.469240   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.469240   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.474875   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:26.475828   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:26.475828   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.475895   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.475895   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.481344   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:26.483364   14108 pod_ready.go:92] pod "etcd-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:26.483364   14108 pod_ready.go:81] duration metric: took 14.124ms for pod "etcd-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.483364   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.593659   14108 request.go:629] Waited for 109.755ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:26.593748   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:26.593748   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.593816   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.593816   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.600041   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:26.797194   14108 request.go:629] Waited for 196.2696ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:26.797296   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:26.797296   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.797296   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.797296   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.802252   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:27.001253   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:27.001476   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.001476   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.001476   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.010923   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:27.203107   14108 request.go:629] Waited for 191.4334ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.203247   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.203247   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.203247   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.203247   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.208920   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:27.486744   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:27.486833   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.486833   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.486833   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.493117   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:27.596392   14108 request.go:629] Waited for 101.43ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.596621   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.596621   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.596621   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.596717   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.601970   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:27.987946   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:27.987946   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.987946   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.987946   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.993797   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:27.994799   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.994799   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.994861   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.994861   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.999816   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:28.489251   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:28.489251   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:28.489346   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:28.489346   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:28.498665   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:28.499581   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:28.499581   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:28.499581   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:28.499649   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:28.504536   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:28.505406   14108 pod_ready.go:102] pod "etcd-ha-513500-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:30:28.991355   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:28.991622   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:28.991622   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:28.991622   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:28.997073   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:28.999298   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:28.999298   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:28.999298   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:28.999298   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:29.004222   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:29.489359   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:29.489793   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:29.489793   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:29.489793   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:29.497456   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:30:29.498751   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:29.498751   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:29.498751   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:29.498751   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:29.503128   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:29.993177   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:29.993421   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:29.993421   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:29.993421   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:29.999450   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:30.000305   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:30.000305   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.000305   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.000305   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.004511   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:30.498031   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:30.498152   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.498152   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.498152   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.505061   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:30.505796   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:30.505796   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.505796   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.505796   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.510247   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:30.512029   14108 pod_ready.go:102] pod "etcd-ha-513500-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:30:30.984865   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:30.985049   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.985049   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.985194   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.989983   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:30.991721   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:30.991721   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.991721   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.991721   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.996048   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:31.498504   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:31.498504   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:31.498504   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:31.498504   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:31.507146   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:30:31.509486   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:31.509571   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:31.509571   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:31.509571   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:31.515094   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:31.989369   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:31.989435   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:31.989435   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:31.989435   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:31.993295   14108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:30:31.994850   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:31.994850   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:31.994850   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:31.994850   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:31.999453   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.000849   14108 pod_ready.go:92] pod "etcd-ha-513500-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.000849   14108 pod_ready.go:81] duration metric: took 5.5174417s for pod "etcd-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.000849   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.000849   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500
	I0429 19:30:32.000849   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.000849   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.000849   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.006129   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.007185   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:32.007185   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.007185   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.007185   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.016442   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:32.016919   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.016919   14108 pod_ready.go:81] duration metric: took 16.0696ms for pod "kube-apiserver-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.016919   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.016919   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:30:32.016919   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.016919   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.016919   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.021065   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.021798   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:32.021798   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.021947   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.021947   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.026842   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.027791   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.027847   14108 pod_ready.go:81] duration metric: took 10.9277ms for pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.027847   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.193363   14108 request.go:629] Waited for 165.1984ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m03
	I0429 19:30:32.193688   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m03
	I0429 19:30:32.193688   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.193688   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.193688   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.198330   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.398365   14108 request.go:629] Waited for 198.6615ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:32.398780   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:32.398780   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.398780   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.398878   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.404851   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:32.406204   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.406257   14108 pod_ready.go:81] duration metric: took 378.3546ms for pod "kube-apiserver-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.406257   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.604410   14108 request.go:629] Waited for 197.9082ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500
	I0429 19:30:32.604706   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500
	I0429 19:30:32.604706   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.604706   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.604706   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.610126   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.792486   14108 request.go:629] Waited for 180.2535ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:32.792624   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:32.792624   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.792749   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.792868   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.797406   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.799059   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.799149   14108 pod_ready.go:81] duration metric: took 392.889ms for pod "kube-controller-manager-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.799149   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.995109   14108 request.go:629] Waited for 195.9592ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m02
	I0429 19:30:32.995407   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m02
	I0429 19:30:32.995407   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.995407   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.995407   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.000461   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:33.197733   14108 request.go:629] Waited for 195.8575ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:33.197733   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:33.197733   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.197733   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.197733   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.204640   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:33.208432   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:33.208491   14108 pod_ready.go:81] duration metric: took 409.28ms for pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:33.208491   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:33.392264   14108 request.go:629] Waited for 183.5379ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m03
	I0429 19:30:33.392336   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m03
	I0429 19:30:33.392532   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.392532   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.392532   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.400076   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:30:33.604918   14108 request.go:629] Waited for 202.2926ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:33.605191   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:33.605191   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.605191   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.605191   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.612448   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:33.613233   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:33.613335   14108 pod_ready.go:81] duration metric: took 404.8413ms for pod "kube-controller-manager-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:33.613335   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4l6c" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:33.792898   14108 request.go:629] Waited for 179.4829ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4l6c
	I0429 19:30:33.793211   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4l6c
	I0429 19:30:33.793211   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.793211   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.793211   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.798837   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:33.996298   14108 request.go:629] Waited for 196.1575ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:33.996298   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:33.996298   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.996298   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.996298   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.001661   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:34.003249   14108 pod_ready.go:92] pod "kube-proxy-k4l6c" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:34.003249   14108 pod_ready.go:81] duration metric: took 389.9106ms for pod "kube-proxy-k4l6c" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.003370   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7ddt" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.202031   14108 request.go:629] Waited for 198.5016ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s7ddt
	I0429 19:30:34.202147   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s7ddt
	I0429 19:30:34.202147   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:34.202147   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:34.202147   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.207864   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:34.392151   14108 request.go:629] Waited for 182.9204ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:34.392421   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:34.392421   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:34.392421   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:34.392421   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.398839   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:34.399705   14108 pod_ready.go:92] pod "kube-proxy-s7ddt" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:34.399705   14108 pod_ready.go:81] duration metric: took 396.332ms for pod "kube-proxy-s7ddt" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.399705   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tm7tv" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.598267   14108 request.go:629] Waited for 198.3986ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tm7tv
	I0429 19:30:34.598566   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tm7tv
	I0429 19:30:34.598566   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:34.598566   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:34.598718   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.604317   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:34.801963   14108 request.go:629] Waited for 195.7383ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:34.802357   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:34.802357   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:34.802357   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:34.802666   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.808483   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:34.809338   14108 pod_ready.go:92] pod "kube-proxy-tm7tv" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:34.809338   14108 pod_ready.go:81] duration metric: took 409.6304ms for pod "kube-proxy-tm7tv" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.809338   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.004342   14108 request.go:629] Waited for 194.9182ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500
	I0429 19:30:35.004458   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500
	I0429 19:30:35.004458   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.004458   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.004458   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.010914   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:35.192979   14108 request.go:629] Waited for 182.0641ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:35.193187   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:35.193187   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.193187   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.193187   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.198295   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:35.198976   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:35.199048   14108 pod_ready.go:81] duration metric: took 389.7063ms for pod "kube-scheduler-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.199048   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.396708   14108 request.go:629] Waited for 197.4972ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m02
	I0429 19:30:35.397144   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m02
	I0429 19:30:35.397144   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.397144   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.397231   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.402633   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:35.598540   14108 request.go:629] Waited for 194.6929ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:35.598540   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:35.598540   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.598540   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.598540   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.607719   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:35.608401   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:35.608807   14108 pod_ready.go:81] duration metric: took 409.756ms for pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.608889   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.801637   14108 request.go:629] Waited for 192.746ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m03
	I0429 19:30:35.801637   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m03
	I0429 19:30:35.801901   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.801966   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.801990   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.807380   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:35.992234   14108 request.go:629] Waited for 183.3531ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:35.992381   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:35.992381   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.992381   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.992440   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.997976   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:35.998678   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:35.998678   14108 pod_ready.go:81] duration metric: took 389.7853ms for pod "kube-scheduler-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.998740   14108 pod_ready.go:38] duration metric: took 9.5992202s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:30:35.998740   14108 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:30:36.013099   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:30:36.040683   14108 api_server.go:72] duration metric: took 15.6835365s to wait for apiserver process to appear ...
	I0429 19:30:36.040683   14108 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:30:36.040683   14108 api_server.go:253] Checking apiserver healthz at https://172.17.240.42:8443/healthz ...
	I0429 19:30:36.048954   14108 api_server.go:279] https://172.17.240.42:8443/healthz returned 200:
	ok
	I0429 19:30:36.049597   14108 round_trippers.go:463] GET https://172.17.240.42:8443/version
	I0429 19:30:36.049597   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.049597   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.049597   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.050910   14108 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 19:30:36.053149   14108 api_server.go:141] control plane version: v1.30.0
	I0429 19:30:36.053149   14108 api_server.go:131] duration metric: took 12.466ms to wait for apiserver health ...
	I0429 19:30:36.053149   14108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:30:36.193835   14108 request.go:629] Waited for 140.6009ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:36.194200   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:36.194422   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.194422   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.194422   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.204325   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:36.215639   14108 system_pods.go:59] 24 kube-system pods found
	I0429 19:30:36.215639   14108 system_pods.go:61] "coredns-7db6d8ff4d-5jxcm" [37ba2046-4273-4570-87af-2cc6d03ca54a] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "coredns-7db6d8ff4d-n22jn" [053e60b3-41d0-4923-9655-02d7dacd691f] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "etcd-ha-513500" [63f6504e-f824-4c6d-afb9-92ed2f0457cd] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "etcd-ha-513500-m02" [2d63d157-843e-4750-b4b0-cfa577e7c8a1] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "etcd-ha-513500-m03" [5d7cba98-84b0-4b25-bbdb-189bf3a926db] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kindnet-9tv8w" [28dad06a-bed9-4b9c-a3b6-df814e1f3d7b] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kindnet-9w6qr" [eb7641e9-6df3-4b9f-b78c-e251de8ebf78] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kindnet-kdpql" [da068cd7-8925-45ed-a5a4-ff2db9d08bd8] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-apiserver-ha-513500" [e7a880e7-5218-4bde-9d62-532836751bbe] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-apiserver-ha-513500-m02" [52c1e20c-27a1-47d2-8405-4537727dac35] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-apiserver-ha-513500-m03" [7780dbcd-ed6c-4283-b93f-c725a0a78994] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-controller-manager-ha-513500" [bcf915a3-542c-422a-815b-823254b624ff] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-controller-manager-ha-513500-m02" [bc495cfd-bf88-4ef8-b33c-d252f4d9a717] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-controller-manager-ha-513500-m03" [a4507291-9f79-4ad9-8331-22ae19067d63] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-proxy-k4l6c" [2c1fff7e-2f97-497a-b6b6-0fcb6e2fcea6] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-proxy-s7ddt" [46edafa6-bc34-47d0-b33e-881bb23d4262] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-proxy-tm7tv" [b4ba7f26-253c-4c1c-83f4-7251a2ad14d4] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-scheduler-ha-513500" [76e5a3e9-d895-406a-ad12-cbaa48b4c52d] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-scheduler-ha-513500-m02" [643c27a0-ca4d-499d-abd7-99aa504580cb] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-scheduler-ha-513500-m03" [d319fcbd-9d28-4fca-b9c8-6a7c64c129c9] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-vip-ha-513500" [bf461c57-113c-4b7b-987e-04dcc8c13373] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-vip-ha-513500-m02" [76f42a60-c769-42fe-ab90-963fe0ec3489] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-vip-ha-513500-m03" [e1568d39-7863-4071-b1b5-66713276b66b] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "storage-provisioner" [6a5df654-f7da-40f4-a05f-acf47aa779a1] Running
	I0429 19:30:36.215639   14108 system_pods.go:74] duration metric: took 162.4886ms to wait for pod list to return data ...
	I0429 19:30:36.215639   14108 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:30:36.398702   14108 request.go:629] Waited for 182.1485ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:30:36.399048   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:30:36.399048   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.399048   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.399048   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.407683   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:30:36.407683   14108 default_sa.go:45] found service account: "default"
	I0429 19:30:36.407683   14108 default_sa.go:55] duration metric: took 191.5138ms for default service account to be created ...
	I0429 19:30:36.407683   14108 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:30:36.602610   14108 request.go:629] Waited for 194.7985ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:36.602743   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:36.602743   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.602743   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.602799   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.617222   14108 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 19:30:36.627728   14108 system_pods.go:86] 24 kube-system pods found
	I0429 19:30:36.627728   14108 system_pods.go:89] "coredns-7db6d8ff4d-5jxcm" [37ba2046-4273-4570-87af-2cc6d03ca54a] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "coredns-7db6d8ff4d-n22jn" [053e60b3-41d0-4923-9655-02d7dacd691f] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "etcd-ha-513500" [63f6504e-f824-4c6d-afb9-92ed2f0457cd] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "etcd-ha-513500-m02" [2d63d157-843e-4750-b4b0-cfa577e7c8a1] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "etcd-ha-513500-m03" [5d7cba98-84b0-4b25-bbdb-189bf3a926db] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kindnet-9tv8w" [28dad06a-bed9-4b9c-a3b6-df814e1f3d7b] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kindnet-9w6qr" [eb7641e9-6df3-4b9f-b78c-e251de8ebf78] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kindnet-kdpql" [da068cd7-8925-45ed-a5a4-ff2db9d08bd8] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-apiserver-ha-513500" [e7a880e7-5218-4bde-9d62-532836751bbe] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-apiserver-ha-513500-m02" [52c1e20c-27a1-47d2-8405-4537727dac35] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-apiserver-ha-513500-m03" [7780dbcd-ed6c-4283-b93f-c725a0a78994] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-controller-manager-ha-513500" [bcf915a3-542c-422a-815b-823254b624ff] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-controller-manager-ha-513500-m02" [bc495cfd-bf88-4ef8-b33c-d252f4d9a717] Running
	I0429 19:30:36.628428   14108 system_pods.go:89] "kube-controller-manager-ha-513500-m03" [a4507291-9f79-4ad9-8331-22ae19067d63] Running
	I0429 19:30:36.628428   14108 system_pods.go:89] "kube-proxy-k4l6c" [2c1fff7e-2f97-497a-b6b6-0fcb6e2fcea6] Running
	I0429 19:30:36.628428   14108 system_pods.go:89] "kube-proxy-s7ddt" [46edafa6-bc34-47d0-b33e-881bb23d4262] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-proxy-tm7tv" [b4ba7f26-253c-4c1c-83f4-7251a2ad14d4] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-scheduler-ha-513500" [76e5a3e9-d895-406a-ad12-cbaa48b4c52d] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-scheduler-ha-513500-m02" [643c27a0-ca4d-499d-abd7-99aa504580cb] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-scheduler-ha-513500-m03" [d319fcbd-9d28-4fca-b9c8-6a7c64c129c9] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-vip-ha-513500" [bf461c57-113c-4b7b-987e-04dcc8c13373] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-vip-ha-513500-m02" [76f42a60-c769-42fe-ab90-963fe0ec3489] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-vip-ha-513500-m03" [e1568d39-7863-4071-b1b5-66713276b66b] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "storage-provisioner" [6a5df654-f7da-40f4-a05f-acf47aa779a1] Running
	I0429 19:30:36.628525   14108 system_pods.go:126] duration metric: took 220.8407ms to wait for k8s-apps to be running ...
	I0429 19:30:36.628525   14108 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:30:36.640520   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:30:36.670181   14108 system_svc.go:56] duration metric: took 41.6562ms WaitForService to wait for kubelet
	I0429 19:30:36.670181   14108 kubeadm.go:576] duration metric: took 16.3130297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:30:36.670181   14108 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:30:36.805333   14108 request.go:629] Waited for 135.0648ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes
	I0429 19:30:36.805532   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes
	I0429 19:30:36.805596   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.805596   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.805596   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.812355   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:36.813507   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:30:36.813507   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:30:36.813507   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:30:36.813507   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:30:36.813507   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:30:36.813507   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:30:36.813507   14108 node_conditions.go:105] duration metric: took 143.3249ms to run NodePressure ...
	I0429 19:30:36.813507   14108 start.go:240] waiting for startup goroutines ...
	I0429 19:30:36.813507   14108 start.go:254] writing updated cluster config ...
	I0429 19:30:36.828524   14108 ssh_runner.go:195] Run: rm -f paused
	I0429 19:30:36.978823   14108 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 19:30:36.984290   14108 out.go:177] * Done! kubectl is now configured to use "ha-513500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 29 19:22:43 ha-513500 cri-dockerd[1230]: time="2024-04-29T19:22:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ab1a16ac763fee19b98313a7ff57572e1b39c95937cc1ed38c694cd851438405/resolv.conf as [nameserver 172.17.240.1]"
	Apr 29 19:22:43 ha-513500 cri-dockerd[1230]: time="2024-04-29T19:22:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ec7a4d754b09e617b6d79840ec1a84c263fd8ba0c368db27e16da49bd5b10531/resolv.conf as [nameserver 172.17.240.1]"
	Apr 29 19:22:43 ha-513500 cri-dockerd[1230]: time="2024-04-29T19:22:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/061b9ae8bb5d4b195594a8979ade6d2272b7c9f7056761091426f77917e0f5e0/resolv.conf as [nameserver 172.17.240.1]"
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.368598603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.369561785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.369745381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.378129322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.701790334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.701942431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.702035429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.702244725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.719394203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.719863994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.720055890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.720783976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:31:15 ha-513500 dockerd[1332]: time="2024-04-29T19:31:15.914662473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:31:15 ha-513500 dockerd[1332]: time="2024-04-29T19:31:15.915081269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:31:15 ha-513500 dockerd[1332]: time="2024-04-29T19:31:15.915401666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:31:15 ha-513500 dockerd[1332]: time="2024-04-29T19:31:15.915728063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:31:16 ha-513500 cri-dockerd[1230]: time="2024-04-29T19:31:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50296d28c3005f998d69bc903c6ea6db48991e8d4409d10633aec53b4aff5d51/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 19:31:17 ha-513500 cri-dockerd[1230]: time="2024-04-29T19:31:17Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 19:31:17 ha-513500 dockerd[1332]: time="2024-04-29T19:31:17.751845061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:31:17 ha-513500 dockerd[1332]: time="2024-04-29T19:31:17.752028161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:31:17 ha-513500 dockerd[1332]: time="2024-04-29T19:31:17.752158561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:31:17 ha-513500 dockerd[1332]: time="2024-04-29T19:31:17.752406260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	31760b27e1330       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   50296d28c3005       busybox-fc5497c4f-k7nt6
	d364c1e6d94f1       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   061b9ae8bb5d4       coredns-7db6d8ff4d-n22jn
	fb655010c9750       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   ec7a4d754b09e       coredns-7db6d8ff4d-5jxcm
	ac90b27682671       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   ab1a16ac763fe       storage-provisioner
	05ddacd92005a       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago        Running             kindnet-cni               0                   85bfce17a67a6       kindnet-9w6qr
	c0ca10790ffe0       a0bf559e280cf                                                                                         9 minutes ago        Running             kube-proxy                0                   e86da83dd4c8b       kube-proxy-tm7tv
	3174d69f5cd02       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   f9b372bb3f346       kube-vip-ha-513500
	768ab6a9d4e64       259c8277fcbbc                                                                                         10 minutes ago       Running             kube-scheduler            0                   dea83193ee65c       kube-scheduler-ha-513500
	f2d43ad89ec76       c7aad43836fa5                                                                                         10 minutes ago       Running             kube-controller-manager   0                   df7c2aca21ced       kube-controller-manager-ha-513500
	24fcd8dc17cb7       c42f13656d0b2                                                                                         10 minutes ago       Running             kube-apiserver            0                   09e6ad066f403       kube-apiserver-ha-513500
	ddba464c39361       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   26bae3e1dab45       etcd-ha-513500
	
	
	==> coredns [d364c1e6d94f] <==
	[INFO] 10.244.0.4:33090 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002173s
	[INFO] 10.244.2.2:49076 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001087s
	[INFO] 10.244.2.2:55220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003780794s
	[INFO] 10.244.2.2:38013 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001336s
	[INFO] 10.244.2.2:46025 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002227s
	[INFO] 10.244.2.2:54398 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.002697796s
	[INFO] 10.244.2.2:49424 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000057s
	[INFO] 10.244.2.2:35058 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057999s
	[INFO] 10.244.2.2:36567 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000527s
	[INFO] 10.244.1.2:56534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000299299s
	[INFO] 10.244.1.2:56209 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002045s
	[INFO] 10.244.1.2:46058 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000607s
	[INFO] 10.244.0.4:42958 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268999s
	[INFO] 10.244.0.4:36079 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000558699s
	[INFO] 10.244.0.4:35768 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002419s
	[INFO] 10.244.2.2:38045 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149499s
	[INFO] 10.244.1.2:56344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001919s
	[INFO] 10.244.1.2:34882 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001945s
	[INFO] 10.244.1.2:52415 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001108s
	[INFO] 10.244.0.4:60373 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002066s
	[INFO] 10.244.0.4:39593 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002968s
	[INFO] 10.244.2.2:56962 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001511s
	[INFO] 10.244.1.2:51827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002166s
	[INFO] 10.244.1.2:55197 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000845s
	[INFO] 10.244.1.2:59450 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001143s
	
	
	==> coredns [fb655010c975] <==
	[INFO] 10.244.2.2:54946 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000280099s
	[INFO] 10.244.2.2:58062 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003970695s
	[INFO] 10.244.2.2:51140 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000914s
	[INFO] 10.244.1.2:50028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252599s
	[INFO] 10.244.1.2:60257 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000904s
	[INFO] 10.244.0.4:52149 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.027238658s
	[INFO] 10.244.0.4:58086 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000377799s
	[INFO] 10.244.0.4:36347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210399s
	[INFO] 10.244.0.4:39627 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004474094s
	[INFO] 10.244.1.2:48954 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00015s
	[INFO] 10.244.1.2:53680 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000647s
	[INFO] 10.244.1.2:46398 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001741s
	[INFO] 10.244.1.2:56009 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002674s
	[INFO] 10.244.1.2:46005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001633s
	[INFO] 10.244.0.4:36504 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000722s
	[INFO] 10.244.2.2:33735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001708s
	[INFO] 10.244.2.2:37320 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063s
	[INFO] 10.244.2.2:47242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000605s
	[INFO] 10.244.1.2:56773 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001174s
	[INFO] 10.244.0.4:58475 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000397999s
	[INFO] 10.244.0.4:58342 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001854s
	[INFO] 10.244.2.2:58709 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001071s
	[INFO] 10.244.2.2:58185 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001712s
	[INFO] 10.244.2.2:43286 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000634s
	[INFO] 10.244.1.2:52086 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000417199s
	
	
	==> describe nodes <==
	Name:               ha-513500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-513500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-513500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_22_20_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:22:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-513500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:32:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:31:20 +0000   Mon, 29 Apr 2024 19:22:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:31:20 +0000   Mon, 29 Apr 2024 19:22:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:31:20 +0000   Mon, 29 Apr 2024 19:22:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:31:20 +0000   Mon, 29 Apr 2024 19:22:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.240.42
	  Hostname:    ha-513500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3369ba1532804e80b04fd813c27bd99a
	  System UUID:                1d78230c-499d-7745-aa2e-7c4bf305bc50
	  Boot ID:                    5a7d9e7d-780b-43c5-8522-a1cdbef43f6b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k7nt6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-7db6d8ff4d-5jxcm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m50s
	  kube-system                 coredns-7db6d8ff4d-n22jn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m50s
	  kube-system                 etcd-ha-513500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-9w6qr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m50s
	  kube-system                 kube-apiserver-ha-513500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-513500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-tm7tv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-scheduler-ha-513500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-513500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m48s  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node ha-513500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node ha-513500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node ha-513500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node ha-513500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m51s  node-controller  Node ha-513500 event: Registered Node ha-513500 in Controller
	  Normal  NodeReady                9m41s  kubelet          Node ha-513500 status is now: NodeReady
	  Normal  RegisteredNode           5m41s  node-controller  Node ha-513500 event: Registered Node ha-513500 in Controller
	  Normal  RegisteredNode           108s   node-controller  Node ha-513500 event: Registered Node ha-513500 in Controller
	
	
	Name:               ha-513500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-513500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-513500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_26_24_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:26:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-513500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:32:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:31:25 +0000   Mon, 29 Apr 2024 19:26:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:31:25 +0000   Mon, 29 Apr 2024 19:26:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:31:25 +0000   Mon, 29 Apr 2024 19:26:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:31:25 +0000   Mon, 29 Apr 2024 19:26:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.247.146
	  Hostname:    ha-513500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ac51f9045144bd8a5e94498bc8b29b2
	  System UUID:                161b36b5-754a-9741-b399-febb088d3a37
	  Boot ID:                    2849bf10-85a2-4a05-ade6-24e1c44b59eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-txsvr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-513500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m59s
	  kube-system                 kindnet-kdpql                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-apiserver-ha-513500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-controller-manager-ha-513500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-k4l6c                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-513500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-vip-ha-513500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)  kubelet          Node ha-513500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)  kubelet          Node ha-513500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)  kubelet          Node ha-513500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m1s                 node-controller  Node ha-513500-m02 event: Registered Node ha-513500-m02 in Controller
	  Normal  RegisteredNode           5m41s                node-controller  Node ha-513500-m02 event: Registered Node ha-513500-m02 in Controller
	  Normal  RegisteredNode           108s                 node-controller  Node ha-513500-m02 event: Registered Node ha-513500-m02 in Controller
	
	
	Name:               ha-513500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-513500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-513500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_30_19_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:30:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-513500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:32:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:31:43 +0000   Mon, 29 Apr 2024 19:30:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:31:43 +0000   Mon, 29 Apr 2024 19:30:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:31:43 +0000   Mon, 29 Apr 2024 19:30:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:31:43 +0000   Mon, 29 Apr 2024 19:30:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.246.101
	  Hostname:    ha-513500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 352c8de6680a4d82b936c28b2c2b4af4
	  System UUID:                22f68df9-d6ea-da42-b3a5-feb527052c05
	  Boot ID:                    3d436fce-cbcf-4e43-a244-5b32e568972d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k7rdw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-513500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m10s
	  kube-system                 kindnet-9tv8w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m11s
	  kube-system                 kube-apiserver-ha-513500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-ha-513500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-s7ddt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-ha-513500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-vip-ha-513500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-513500-m03 event: Registered Node ha-513500-m03 in Controller
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-513500-m03 event: Registered Node ha-513500-m03 in Controller
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x2 over 2m11s)  kubelet          Node ha-513500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x2 over 2m11s)  kubelet          Node ha-513500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x2 over 2m11s)  kubelet          Node ha-513500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                116s                   kubelet          Node ha-513500-m03 status is now: NodeReady
	  Normal  RegisteredNode           108s                   node-controller  Node ha-513500-m03 event: Registered Node ha-513500-m03 in Controller
	
	
	==> dmesg <==
	[  +7.439818] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 19:21] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.189015] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +31.573298] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.108323] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.582257] systemd-fstab-generator[984]: Ignoring "noauto" option for root device
	[  +0.210246] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.257185] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +2.920377] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.213026] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.228538] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.302335] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.648892] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.109564] kauditd_printk_skb: 205 callbacks suppressed
	[Apr29 19:22] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[  +6.088578] systemd-fstab-generator[1718]: Ignoring "noauto" option for root device
	[  +0.121125] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.996068] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.426455] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[ +14.311868] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.109377] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.382854] kauditd_printk_skb: 33 callbacks suppressed
	[Apr29 19:26] hrtimer: interrupt took 6254643 ns
	
	
	==> etcd [ddba464c3936] <==
	{"level":"warn","ts":"2024-04-29T19:30:12.578714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.551553ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-ha-513500-m03\" ","response":"range_response_count:1 size:4930"}
	{"level":"info","ts":"2024-04-29T19:30:12.578836Z","caller":"traceutil/trace.go:171","msg":"trace[1003465577] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-ha-513500-m03; range_end:; response_count:1; response_revision:1476; }","duration":"187.753153ms","start":"2024-04-29T19:30:12.391073Z","end":"2024-04-29T19:30:12.578827Z","steps":["trace[1003465577] 'agreement among raft nodes before linearized reading'  (duration: 187.526454ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T19:30:12.701717Z","caller":"traceutil/trace.go:171","msg":"trace[477180902] transaction","detail":"{read_only:false; response_revision:1477; number_of_response:1; }","duration":"100.112048ms","start":"2024-04-29T19:30:12.601554Z","end":"2024-04-29T19:30:12.701666Z","steps":["trace[477180902] 'process raft request'  (duration: 96.469864ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:30:13.02796Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"fed191e7cbe02a93","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-04-29T19:30:14.026824Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"fed191e7cbe02a93","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-04-29T19:30:14.801444Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3ede46bdb03fb638","to":"fed191e7cbe02a93","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-29T19:30:14.80705Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"fed191e7cbe02a93"}
	{"level":"info","ts":"2024-04-29T19:30:14.807277Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3ede46bdb03fb638","remote-peer-id":"fed191e7cbe02a93"}
	{"level":"info","ts":"2024-04-29T19:30:14.882551Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3ede46bdb03fb638","to":"fed191e7cbe02a93","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-29T19:30:14.882957Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3ede46bdb03fb638","remote-peer-id":"fed191e7cbe02a93"}
	{"level":"info","ts":"2024-04-29T19:30:14.906226Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3ede46bdb03fb638","remote-peer-id":"fed191e7cbe02a93"}
	{"level":"info","ts":"2024-04-29T19:30:14.912104Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3ede46bdb03fb638","remote-peer-id":"fed191e7cbe02a93"}
	{"level":"warn","ts":"2024-04-29T19:30:15.028304Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"fed191e7cbe02a93","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-04-29T19:30:16.027743Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"fed191e7cbe02a93","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-04-29T19:30:17.027649Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"fed191e7cbe02a93","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-04-29T19:30:17.473364Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"fed191e7cbe02a93","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"112.510131ms"}
	{"level":"warn","ts":"2024-04-29T19:30:17.473459Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a30859e8a544b3c9","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"112.610231ms"}
	{"level":"warn","ts":"2024-04-29T19:30:17.634747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.218695ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13130402129766002495 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:36388f2b4ebc2f3e>","response":"size:41"}
	{"level":"warn","ts":"2024-04-29T19:30:17.635198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T19:30:17.219402Z","time spent":"415.793413ms","remote":"127.0.0.1:33368","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-04-29T19:30:18.534059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3ede46bdb03fb638 switched to configuration voters=(4530136055701026360 11747738483735966665 18361617580510161555)"}
	{"level":"info","ts":"2024-04-29T19:30:18.534477Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"d46d9de32197e8a1","local-member-id":"3ede46bdb03fb638"}
	{"level":"info","ts":"2024-04-29T19:30:18.535017Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"3ede46bdb03fb638","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"fed191e7cbe02a93"}
	{"level":"info","ts":"2024-04-29T19:32:13.034268Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1052}
	{"level":"info","ts":"2024-04-29T19:32:13.149959Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1052,"took":"115.075675ms","hash":2220340767,"current-db-size-bytes":3608576,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2113536,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-04-29T19:32:13.150152Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2220340767,"revision":1052,"compact-revision":-1}
	
	
	==> kernel <==
	 19:32:23 up 12 min,  0 users,  load average: 0.72, 0.82, 0.46
	Linux ha-513500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [05ddacd92005] <==
	I0429 19:31:41.990910       1 main.go:250] Node ha-513500-m03 has CIDR [10.244.2.0/24] 
	I0429 19:31:52.007544       1 main.go:223] Handling node with IPs: map[172.17.240.42:{}]
	I0429 19:31:52.008168       1 main.go:227] handling current node
	I0429 19:31:52.008474       1 main.go:223] Handling node with IPs: map[172.17.247.146:{}]
	I0429 19:31:52.008725       1 main.go:250] Node ha-513500-m02 has CIDR [10.244.1.0/24] 
	I0429 19:31:52.009130       1 main.go:223] Handling node with IPs: map[172.17.246.101:{}]
	I0429 19:31:52.009384       1 main.go:250] Node ha-513500-m03 has CIDR [10.244.2.0/24] 
	I0429 19:32:02.018547       1 main.go:223] Handling node with IPs: map[172.17.240.42:{}]
	I0429 19:32:02.018611       1 main.go:227] handling current node
	I0429 19:32:02.018627       1 main.go:223] Handling node with IPs: map[172.17.247.146:{}]
	I0429 19:32:02.018636       1 main.go:250] Node ha-513500-m02 has CIDR [10.244.1.0/24] 
	I0429 19:32:02.019290       1 main.go:223] Handling node with IPs: map[172.17.246.101:{}]
	I0429 19:32:02.019331       1 main.go:250] Node ha-513500-m03 has CIDR [10.244.2.0/24] 
	I0429 19:32:12.029250       1 main.go:223] Handling node with IPs: map[172.17.240.42:{}]
	I0429 19:32:12.029474       1 main.go:227] handling current node
	I0429 19:32:12.029511       1 main.go:223] Handling node with IPs: map[172.17.247.146:{}]
	I0429 19:32:12.029536       1 main.go:250] Node ha-513500-m02 has CIDR [10.244.1.0/24] 
	I0429 19:32:12.029725       1 main.go:223] Handling node with IPs: map[172.17.246.101:{}]
	I0429 19:32:12.029790       1 main.go:250] Node ha-513500-m03 has CIDR [10.244.2.0/24] 
	I0429 19:32:22.039374       1 main.go:223] Handling node with IPs: map[172.17.240.42:{}]
	I0429 19:32:22.040139       1 main.go:227] handling current node
	I0429 19:32:22.040238       1 main.go:223] Handling node with IPs: map[172.17.247.146:{}]
	I0429 19:32:22.040386       1 main.go:250] Node ha-513500-m02 has CIDR [10.244.1.0/24] 
	I0429 19:32:22.040982       1 main.go:223] Handling node with IPs: map[172.17.246.101:{}]
	I0429 19:32:22.041075       1 main.go:250] Node ha-513500-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [24fcd8dc17cb] <==
	Trace[613677868]:  ---"Txn call completed" 553ms (19:25:30.890)]
	Trace[613677868]: [554.389358ms] [554.389358ms] END
	I0429 19:29:48.279912       1 trace.go:236] Trace[1199958537]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.17.240.42,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 19:29:47.763) (total time: 516ms):
	Trace[1199958537]: ---"Txn call completed" 452ms (19:29:48.279)
	Trace[1199958537]: [516.087708ms] [516.087708ms] END
	E0429 19:30:12.349609       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 19:30:12.350564       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 19:30:12.349837       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 70.099µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 19:30:12.352361       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 19:30:12.352584       1 timeout.go:142] post-timeout activity - time-elapsed: 3.070286ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
	E0429 19:31:21.880017       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53683: use of closed network connection
	E0429 19:31:23.549927       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53685: use of closed network connection
	E0429 19:31:24.101642       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53688: use of closed network connection
	E0429 19:31:24.700337       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53690: use of closed network connection
	E0429 19:31:25.281787       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53692: use of closed network connection
	E0429 19:31:25.856295       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53694: use of closed network connection
	E0429 19:31:26.427647       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53696: use of closed network connection
	E0429 19:31:26.987821       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53698: use of closed network connection
	E0429 19:31:27.554887       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53700: use of closed network connection
	E0429 19:31:28.600273       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53703: use of closed network connection
	E0429 19:31:39.196639       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53705: use of closed network connection
	E0429 19:31:39.791837       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53708: use of closed network connection
	E0429 19:31:50.358343       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53710: use of closed network connection
	E0429 19:31:50.946983       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53713: use of closed network connection
	E0429 19:32:01.554032       1 conn.go:339] Error on socket receive: read tcp 172.17.255.254:8443->172.17.240.1:53715: use of closed network connection
	
	
	==> kube-controller-manager [f2d43ad89ec7] <==
	I0429 19:22:42.096487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.398µs"
	I0429 19:22:42.172142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.098µs"
	I0429 19:22:44.658157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="242.093µs"
	I0429 19:22:44.715021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.872979ms"
	I0429 19:22:44.718234       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.698µs"
	I0429 19:22:44.801838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.726124ms"
	I0429 19:22:44.802211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="228.893µs"
	I0429 19:22:46.511544       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 19:26:19.523490       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-513500-m02\" does not exist"
	I0429 19:26:19.539501       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-513500-m02" podCIDRs=["10.244.1.0/24"]
	I0429 19:26:21.558506       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-513500-m02"
	I0429 19:30:11.531218       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-513500-m03\" does not exist"
	I0429 19:30:11.564113       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-513500-m03" podCIDRs=["10.244.2.0/24"]
	I0429 19:30:11.605524       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-513500-m03"
	I0429 19:31:14.980963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.830652ms"
	I0429 19:31:15.180461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="199.10268ms"
	I0429 19:31:15.504804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="324.26947ms"
	I0429 19:31:15.656125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.25264ms"
	I0429 19:31:15.656568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="357.197µs"
	I0429 19:31:18.248560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.324252ms"
	I0429 19:31:18.249018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.2µs"
	I0429 19:31:18.506585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.835664ms"
	I0429 19:31:18.508020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.3µs"
	I0429 19:31:18.590521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.439075ms"
	I0429 19:31:18.592258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.7µs"
	
	
	==> kube-proxy [c0ca10790ffe] <==
	I0429 19:22:34.055101       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:22:34.089710       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.240.42"]
	I0429 19:22:34.143942       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:22:34.144039       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:22:34.144064       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:22:34.151484       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:22:34.152452       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:22:34.152502       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:22:34.159175       1 config.go:192] "Starting service config controller"
	I0429 19:22:34.159944       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:22:34.159998       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:22:34.160006       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:22:34.163187       1 config.go:319] "Starting node config controller"
	I0429 19:22:34.163226       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:22:34.260818       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:22:34.260751       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:22:34.264047       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [768ab6a9d4e6] <==
	W0429 19:22:16.282776       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 19:22:16.283059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 19:22:16.342538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 19:22:16.342776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 19:22:16.349978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:22:16.350032       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:22:16.410571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:22:16.411241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:22:16.519007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:22:16.519170       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:22:16.556273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 19:22:16.556720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 19:22:16.666413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:22:16.667013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 19:22:16.802894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:22:16.803021       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:22:16.836025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:22:16.836503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:22:16.901987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 19:22:16.902577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 19:22:19.381948       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 19:30:11.647198       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9tv8w\": pod kindnet-9tv8w is already assigned to node \"ha-513500-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-9tv8w" node="ha-513500-m03"
	E0429 19:30:11.647661       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 28dad06a-bed9-4b9c-a3b6-df814e1f3d7b(kube-system/kindnet-9tv8w) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9tv8w"
	E0429 19:30:11.647976       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9tv8w\": pod kindnet-9tv8w is already assigned to node \"ha-513500-m03\"" pod="kube-system/kindnet-9tv8w"
	I0429 19:30:11.648139       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9tv8w" node="ha-513500-m03"
	
	
	==> kubelet <==
	Apr 29 19:28:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:28:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:29:19 ha-513500 kubelet[2212]: E0429 19:29:19.578749    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:29:19 ha-513500 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:29:19 ha-513500 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:29:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:29:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:30:19 ha-513500 kubelet[2212]: E0429 19:30:19.579451    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:30:19 ha-513500 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:30:19 ha-513500 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:30:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:30:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:31:15 ha-513500 kubelet[2212]: I0429 19:31:15.007922    2212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5jxcm" podStartSLOduration=523.007714025 podStartE2EDuration="8m43.007714025s" podCreationTimestamp="2024-04-29 19:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 19:22:44.761947113 +0000 UTC m=+25.541742506" watchObservedRunningTime="2024-04-29 19:31:15.007714025 +0000 UTC m=+535.787509318"
	Apr 29 19:31:15 ha-513500 kubelet[2212]: I0429 19:31:15.008936    2212 topology_manager.go:215] "Topology Admit Handler" podUID="43b8eb97-6dbb-4edb-adc8-0e6dced7260c" podNamespace="default" podName="busybox-fc5497c4f-k7nt6"
	Apr 29 19:31:15 ha-513500 kubelet[2212]: I0429 19:31:15.172065    2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7lft\" (UniqueName: \"kubernetes.io/projected/43b8eb97-6dbb-4edb-adc8-0e6dced7260c-kube-api-access-l7lft\") pod \"busybox-fc5497c4f-k7nt6\" (UID: \"43b8eb97-6dbb-4edb-adc8-0e6dced7260c\") " pod="default/busybox-fc5497c4f-k7nt6"
	Apr 29 19:31:19 ha-513500 kubelet[2212]: E0429 19:31:19.584754    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:31:19 ha-513500 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:31:19 ha-513500 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:31:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:31:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:32:19 ha-513500 kubelet[2212]: E0429 19:32:19.585283    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:32:19 ha-513500 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:32:19 ha-513500 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:32:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:32:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 19:32:14.304803   14276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-513500 -n ha-513500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-513500 -n ha-513500: (12.6253602s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-513500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (70.47s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (98.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 node stop m02 -v=7 --alsologtostderr
E0429 19:48:10.203547   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 node stop m02 -v=7 --alsologtostderr: (35.7049578s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-513500 status -v=7 --alsologtostderr: exit status 1 (26.5135774s)

                                                
                                                
** stderr ** 
	W0429 19:48:43.696127    2004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 19:48:43.788056    2004 out.go:291] Setting OutFile to fd 1044 ...
	I0429 19:48:43.789048    2004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:48:43.789048    2004 out.go:304] Setting ErrFile to fd 1488...
	I0429 19:48:43.789048    2004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:48:43.811215    2004 out.go:298] Setting JSON to false
	I0429 19:48:43.811215    2004 mustload.go:65] Loading cluster: ha-513500
	I0429 19:48:43.811870    2004 notify.go:220] Checking for updates...
	I0429 19:48:43.811968    2004 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:48:43.811968    2004 status.go:255] checking status of ha-513500 ...
	I0429 19:48:43.812821    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:48:46.049744    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:48:46.049744    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:48:46.049744    2004 status.go:330] ha-513500 host status = "Running" (err=<nil>)
	I0429 19:48:46.049744    2004 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:48:46.049744    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:48:48.297234    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:48:48.297337    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:48:48.297462    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:48:51.036025    2004 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:48:51.036970    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:48:51.036970    2004 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:48:51.053426    2004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:48:51.053426    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:48:53.249528    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:48:53.249849    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:48:53.249896    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:48:55.902103    2004 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:48:55.902103    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:48:55.903261    2004 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:48:56.003133    2004 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9496161s)
	I0429 19:48:56.017665    2004 ssh_runner.go:195] Run: systemctl --version
	I0429 19:48:56.045585    2004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:48:56.075436    2004 kubeconfig.go:125] found "ha-513500" server: "https://172.17.255.254:8443"
	I0429 19:48:56.075436    2004 api_server.go:166] Checking apiserver status ...
	I0429 19:48:56.090907    2004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:48:56.141315    2004 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2027/cgroup
	W0429 19:48:56.160739    2004 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2027/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:48:56.179445    2004 ssh_runner.go:195] Run: ls
	I0429 19:48:56.188656    2004 api_server.go:253] Checking apiserver healthz at https://172.17.255.254:8443/healthz ...
	I0429 19:48:56.196244    2004 api_server.go:279] https://172.17.255.254:8443/healthz returned 200:
	ok
	I0429 19:48:56.196413    2004 status.go:422] ha-513500 apiserver status = Running (err=<nil>)
	I0429 19:48:56.196413    2004 status.go:257] ha-513500 status: &{Name:ha-513500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:48:56.196413    2004 status.go:255] checking status of ha-513500-m02 ...
	I0429 19:48:56.197235    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:48:58.281055    2004 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 19:48:58.281732    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:48:58.281732    2004 status.go:330] ha-513500-m02 host status = "Stopped" (err=<nil>)
	I0429 19:48:58.281732    2004 status.go:343] host is not running, skipping remaining checks
	I0429 19:48:58.281732    2004 status.go:257] ha-513500-m02 status: &{Name:ha-513500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:48:58.281847    2004 status.go:255] checking status of ha-513500-m03 ...
	I0429 19:48:58.282721    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:49:00.531780    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:49:00.531780    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:49:00.532636    2004 status.go:330] ha-513500-m03 host status = "Running" (err=<nil>)
	I0429 19:49:00.532636    2004 host.go:66] Checking if "ha-513500-m03" exists ...
	I0429 19:49:00.533308    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:49:02.756905    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:49:02.757596    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:49:02.757596    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:49:05.436423    2004 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:49:05.436423    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:49:05.436423    2004 host.go:66] Checking if "ha-513500-m03" exists ...
	I0429 19:49:05.453027    2004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:49:05.453027    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:49:07.626620    2004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:49:07.626620    2004 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:49:07.627407    2004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-513500 status -v=7 --alsologtostderr" : exit status 1
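The stderr dump above is in the standard klog header format (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`, as noted in the "Last Start" section of the logs below). When triaging failures like this one, it can help to pull those fields out mechanically; the following is a minimal illustrative sketch (the `parse_klog` helper is hypothetical and not part of the test suite):

```python
import re

# klog line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])"                      # severity: Info/Warning/Error/Fatal
    r"(?P<month>\d{2})(?P<day>\d{2}) "         # mmdd
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"     # hh:mm:ss.uuuuuu
    r"(?P<threadid>\d+) "                      # goroutine/thread id
    r"(?P<file>[^:]+):(?P<line>\d+)\] "        # source file:line
    r"(?P<msg>.*)$"                            # log message
)

def parse_klog(line: str):
    """Return a dict of klog header fields, or None if the line doesn't match."""
    m = KLOG_RE.match(line.strip())
    return m.groupdict() if m else None

# Example against a line copied from the stderr above:
rec = parse_klog("I0429 19:48:56.196413    2004 status.go:422] "
                 "ha-513500 apiserver status = Running (err=<nil>)")
```

Filtering the dump for `level == "W"` or grouping by `file` quickly surfaces where the status check stalled (here, the repeated Hyper-V adapter queries in `main.go`).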
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-513500 -n ha-513500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-513500 -n ha-513500: (12.4264423s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 logs -n 25: (9.204591s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:43 UTC | 29 Apr 24 19:43 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n                                                                                                         | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:43 UTC | 29 Apr 24 19:43 UTC |
	|         | ha-513500-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:43 UTC | 29 Apr 24 19:44 UTC |
	|         | ha-513500:/home/docker/cp-test_ha-513500-m03_ha-513500.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n                                                                                                         | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:44 UTC | 29 Apr 24 19:44 UTC |
	|         | ha-513500-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n ha-513500 sudo cat                                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:44 UTC | 29 Apr 24 19:44 UTC |
	|         | /home/docker/cp-test_ha-513500-m03_ha-513500.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:44 UTC | 29 Apr 24 19:44 UTC |
	|         | ha-513500-m02:/home/docker/cp-test_ha-513500-m03_ha-513500-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n                                                                                                         | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:44 UTC | 29 Apr 24 19:44 UTC |
	|         | ha-513500-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n ha-513500-m02 sudo cat                                                                                  | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:44 UTC | 29 Apr 24 19:45 UTC |
	|         | /home/docker/cp-test_ha-513500-m03_ha-513500-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:45 UTC | 29 Apr 24 19:45 UTC |
	|         | ha-513500-m04:/home/docker/cp-test_ha-513500-m03_ha-513500-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n                                                                                                         | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:45 UTC | 29 Apr 24 19:45 UTC |
	|         | ha-513500-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n ha-513500-m04 sudo cat                                                                                  | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:45 UTC | 29 Apr 24 19:45 UTC |
	|         | /home/docker/cp-test_ha-513500-m03_ha-513500-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-513500 cp testdata\cp-test.txt                                                                                        | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:45 UTC | 29 Apr 24 19:45 UTC |
	|         | ha-513500-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n                                                                                                         | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:45 UTC | 29 Apr 24 19:45 UTC |
	|         | ha-513500-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:45 UTC | 29 Apr 24 19:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n                                                                                                         | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:46 UTC | 29 Apr 24 19:46 UTC |
	|         | ha-513500-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:46 UTC | 29 Apr 24 19:46 UTC |
	|         | ha-513500:/home/docker/cp-test_ha-513500-m04_ha-513500.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n                                                                                                         | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:46 UTC | 29 Apr 24 19:46 UTC |
	|         | ha-513500-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n ha-513500 sudo cat                                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:46 UTC | 29 Apr 24 19:46 UTC |
	|         | /home/docker/cp-test_ha-513500-m04_ha-513500.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:46 UTC | 29 Apr 24 19:47 UTC |
	|         | ha-513500-m02:/home/docker/cp-test_ha-513500-m04_ha-513500-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n                                                                                                         | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:47 UTC | 29 Apr 24 19:47 UTC |
	|         | ha-513500-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n ha-513500-m02 sudo cat                                                                                  | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:47 UTC | 29 Apr 24 19:47 UTC |
	|         | /home/docker/cp-test_ha-513500-m04_ha-513500-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt                                                                      | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:47 UTC | 29 Apr 24 19:47 UTC |
	|         | ha-513500-m03:/home/docker/cp-test_ha-513500-m04_ha-513500-m03.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n                                                                                                         | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:47 UTC | 29 Apr 24 19:47 UTC |
	|         | ha-513500-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-513500 ssh -n ha-513500-m03 sudo cat                                                                                  | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:47 UTC | 29 Apr 24 19:48 UTC |
	|         | /home/docker/cp-test_ha-513500-m04_ha-513500-m03.txt                                                                     |           |                   |         |                     |                     |
	| node    | ha-513500 node stop m02 -v=7                                                                                             | ha-513500 | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:48 UTC | 29 Apr 24 19:48 UTC |
	|         | --alsologtostderr                                                                                                        |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:19:10
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:19:10.246588   14108 out.go:291] Setting OutFile to fd 1360 ...
	I0429 19:19:10.246588   14108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:19:10.246588   14108 out.go:304] Setting ErrFile to fd 1400...
	I0429 19:19:10.246588   14108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:19:10.270559   14108 out.go:298] Setting JSON to false
	I0429 19:19:10.274558   14108 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20289,"bootTime":1714398060,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 19:19:10.274558   14108 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 19:19:10.279558   14108 out.go:177] * [ha-513500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 19:19:10.286568   14108 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:19:10.285566   14108 notify.go:220] Checking for updates...
	I0429 19:19:10.291567   14108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:19:10.293560   14108 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 19:19:10.296738   14108 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:19:10.300635   14108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:19:10.304735   14108 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:19:15.746631   14108 out.go:177] * Using the hyperv driver based on user configuration
	I0429 19:19:15.751891   14108 start.go:297] selected driver: hyperv
	I0429 19:19:15.751891   14108 start.go:901] validating driver "hyperv" against <nil>
	I0429 19:19:15.751891   14108 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:19:15.805253   14108 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 19:19:15.806756   14108 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:19:15.807278   14108 cni.go:84] Creating CNI manager for ""
	I0429 19:19:15.807278   14108 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 19:19:15.807278   14108 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 19:19:15.807525   14108 start.go:340] cluster config:
	{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:19:15.807525   14108 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:19:15.812737   14108 out.go:177] * Starting "ha-513500" primary control-plane node in "ha-513500" cluster
	I0429 19:19:15.815539   14108 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:19:15.816075   14108 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 19:19:15.816075   14108 cache.go:56] Caching tarball of preloaded images
	I0429 19:19:15.816075   14108 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 19:19:15.816702   14108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 19:19:15.817058   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:19:15.817582   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json: {Name:mk44f90f8510bd5a50ac9a4fb1e24e93a65c8594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:19:15.818541   14108 start.go:360] acquireMachinesLock for ha-513500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:19:15.818541   14108 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-513500"
	I0429 19:19:15.819064   14108 start.go:93] Provisioning new machine with config: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:19:15.819156   14108 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 19:19:15.823822   14108 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:19:15.823822   14108 start.go:159] libmachine.API.Create for "ha-513500" (driver="hyperv")
	I0429 19:19:15.823822   14108 client.go:168] LocalClient.Create starting
	I0429 19:19:15.823822   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 19:19:15.824828   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:19:15.824828   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:19:15.825406   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 19:19:15.825406   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:19:15.825406   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:19:15.825406   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 19:19:17.955349   14108 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 19:19:17.955349   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:17.956362   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 19:19:19.786601   14108 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 19:19:19.787270   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:19.787373   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:19:21.362994   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:19:21.362994   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:21.363714   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:19:24.969886   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:19:24.969952   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:24.972613   14108 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:19:25.529263   14108 main.go:141] libmachine: Creating SSH key...
	I0429 19:19:25.667238   14108 main.go:141] libmachine: Creating VM...
	I0429 19:19:25.667238   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:19:28.583767   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:19:28.584862   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:28.584862   14108 main.go:141] libmachine: Using switch "Default Switch"
	I0429 19:19:28.585014   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:19:30.459472   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:19:30.460556   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:30.460556   14108 main.go:141] libmachine: Creating VHD
	I0429 19:19:30.460663   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 19:19:34.192917   14108 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D03D646A-0F76-4175-BEF8-7B7ECC51E326
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 19:19:34.192985   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:34.192985   14108 main.go:141] libmachine: Writing magic tar header
	I0429 19:19:34.193146   14108 main.go:141] libmachine: Writing SSH key tar header
	I0429 19:19:34.206514   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 19:19:37.358361   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:37.358888   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:37.358888   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\disk.vhd' -SizeBytes 20000MB
	I0429 19:19:39.911695   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:39.911781   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:39.911781   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-513500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 19:19:43.586735   14108 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-513500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 19:19:43.586735   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:43.587582   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-513500 -DynamicMemoryEnabled $false
	I0429 19:19:45.801894   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:45.801894   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:45.801975   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-513500 -Count 2
	I0429 19:19:47.960247   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:47.960247   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:47.961008   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-513500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\boot2docker.iso'
	I0429 19:19:50.527908   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:50.527908   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:50.527908   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-513500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\disk.vhd'
	I0429 19:19:53.221294   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:53.221485   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:53.221485   14108 main.go:141] libmachine: Starting VM...
	I0429 19:19:53.221546   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-513500
	I0429 19:19:56.276471   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:19:56.277467   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:56.277517   14108 main.go:141] libmachine: Waiting for host to start...
	I0429 19:19:56.277622   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:19:58.505529   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:19:58.505529   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:19:58.505529   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:01.060141   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:20:01.060141   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:02.063828   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:04.275033   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:04.275428   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:04.275517   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:06.784135   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:20:06.784612   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:07.800321   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:09.970959   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:09.971020   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:09.971268   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:12.479461   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:20:12.479461   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:13.484174   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:15.688730   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:15.688730   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:15.688820   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:18.209336   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:20:18.209336   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:19.224347   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:21.361528   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:21.361528   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:21.361528   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:24.072091   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:24.072091   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:24.072091   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:26.221431   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:26.221825   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:26.221910   14108 machine.go:94] provisionDockerMachine start ...
	I0429 19:20:26.222027   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:28.380817   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:28.380998   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:28.381098   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:30.976776   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:30.976776   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:30.983524   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:20:30.993618   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:20:30.993618   14108 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:20:31.132229   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 19:20:31.132391   14108 buildroot.go:166] provisioning hostname "ha-513500"
	I0429 19:20:31.132391   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:33.325249   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:33.325249   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:33.325545   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:35.881046   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:35.881125   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:35.888317   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:20:35.888900   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:20:35.888900   14108 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-513500 && echo "ha-513500" | sudo tee /etc/hostname
	I0429 19:20:36.036259   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-513500
	
	I0429 19:20:36.036391   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:38.113040   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:38.114051   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:38.114051   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:40.630914   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:40.631217   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:40.637214   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:20:40.637889   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:20:40.637889   14108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:20:40.784649   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:20:40.784649   14108 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 19:20:40.784649   14108 buildroot.go:174] setting up certificates
	I0429 19:20:40.784649   14108 provision.go:84] configureAuth start
	I0429 19:20:40.785222   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:42.878866   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:42.878866   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:42.878960   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:45.430770   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:45.430952   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:45.431037   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:47.544305   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:47.544305   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:47.545029   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:50.080352   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:50.081024   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:50.081024   14108 provision.go:143] copyHostCerts
	I0429 19:20:50.081024   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 19:20:50.081024   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 19:20:50.081024   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 19:20:50.082037   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 19:20:50.082963   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 19:20:50.082963   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 19:20:50.083497   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 19:20:50.083567   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 19:20:50.084221   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 19:20:50.084855   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 19:20:50.084855   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 19:20:50.085390   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 19:20:50.086284   14108 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-513500 san=[127.0.0.1 172.17.240.42 ha-513500 localhost minikube]
	I0429 19:20:50.333962   14108 provision.go:177] copyRemoteCerts
	I0429 19:20:50.347400   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:20:50.347483   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:52.478342   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:52.478342   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:52.478549   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:20:55.045866   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:20:55.045866   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:55.046174   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:20:55.157056   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8096221s)
	I0429 19:20:55.157251   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 19:20:55.157251   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 19:20:55.204748   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 19:20:55.204748   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 19:20:55.252301   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 19:20:55.256671   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 19:20:55.307875   14108 provision.go:87] duration metric: took 14.5231243s to configureAuth
	I0429 19:20:55.307875   14108 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:20:55.308484   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:20:55.308484   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:20:57.428610   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:20:57.428817   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:20:57.428914   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:00.005952   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:00.006879   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:00.013913   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:00.014453   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:00.014453   14108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 19:21:00.151165   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 19:21:00.151165   14108 buildroot.go:70] root file system type: tmpfs
	I0429 19:21:00.151702   14108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 19:21:00.151846   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:02.320088   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:02.321063   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:02.321063   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:04.901561   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:04.902359   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:04.908819   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:04.909575   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:04.909575   14108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 19:21:05.074578   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 19:21:05.074578   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:07.217506   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:07.218030   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:07.218133   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:09.813495   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:09.813663   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:09.820592   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:09.821328   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:09.821328   14108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 19:21:12.033597   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 19:21:12.033597   14108 machine.go:97] duration metric: took 45.8113658s to provisionDockerMachine
	I0429 19:21:12.033597   14108 client.go:171] duration metric: took 1m56.2089575s to LocalClient.Create
	I0429 19:21:12.033597   14108 start.go:167] duration metric: took 1m56.2089575s to libmachine.API.Create "ha-513500"
	I0429 19:21:12.034179   14108 start.go:293] postStartSetup for "ha-513500" (driver="hyperv")
	I0429 19:21:12.034179   14108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:21:12.045874   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:21:12.045874   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:14.173354   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:14.173500   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:14.173577   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:16.757120   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:16.757251   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:16.757892   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:21:16.874631   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8286172s)
	I0429 19:21:16.889137   14108 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:21:16.901704   14108 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:21:16.901862   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 19:21:16.902539   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 19:21:16.904359   14108 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 19:21:16.904480   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 19:21:16.920436   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:21:16.941353   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 19:21:16.990264   14108 start.go:296] duration metric: took 4.9560505s for postStartSetup
	I0429 19:21:16.993819   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:19.118124   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:19.118403   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:19.118403   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:21.676206   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:21.676550   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:21.676738   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:21:21.679851   14108 start.go:128] duration metric: took 2m5.8597325s to createHost
	I0429 19:21:21.679934   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:23.812228   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:23.812228   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:23.812228   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:26.375058   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:26.375559   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:26.381728   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:26.382270   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:26.382270   14108 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 19:21:26.510156   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418486.510636075
	
	I0429 19:21:26.510156   14108 fix.go:216] guest clock: 1714418486.510636075
	I0429 19:21:26.510156   14108 fix.go:229] Guest: 2024-04-29 19:21:26.510636075 +0000 UTC Remote: 2024-04-29 19:21:21.6798513 +0000 UTC m=+131.612114101 (delta=4.830784775s)
	I0429 19:21:26.510156   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:28.650272   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:28.650572   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:28.650572   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:31.267724   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:31.267898   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:31.275157   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:21:31.275835   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.240.42 22 <nil> <nil>}
	I0429 19:21:31.275884   14108 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714418486
	I0429 19:21:31.425770   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 19:21:26 UTC 2024
	
	I0429 19:21:31.425837   14108 fix.go:236] clock set: Mon Apr 29 19:21:26 UTC 2024
	 (err=<nil>)
	I0429 19:21:31.425837   14108 start.go:83] releasing machines lock for "ha-513500", held for 2m15.6063439s
	I0429 19:21:31.426110   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:33.605823   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:33.605823   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:33.605910   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:36.180062   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:36.180062   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:36.184523   14108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:21:36.184615   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:36.197374   14108 ssh_runner.go:195] Run: cat /version.json
	I0429 19:21:36.198387   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:21:38.323743   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:38.323743   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:38.323958   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:38.331721   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:21:38.331721   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:38.331721   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:21:41.002676   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:41.003681   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:41.004315   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:21:41.028036   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:21:41.028036   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:21:41.029326   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:21:41.164966   14108 ssh_runner.go:235] Completed: cat /version.json: (4.9675576s)
	I0429 19:21:41.164966   14108 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9804083s)
	I0429 19:21:41.180799   14108 ssh_runner.go:195] Run: systemctl --version
	I0429 19:21:41.203982   14108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:21:41.212207   14108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:21:41.225252   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:21:41.255874   14108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:21:41.255874   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:21:41.256251   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:21:41.309992   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 19:21:41.342790   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 19:21:41.361245   14108 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 19:21:41.374831   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 19:21:41.407402   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:21:41.447005   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 19:21:41.487513   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:21:41.523062   14108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:21:41.557006   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 19:21:41.592831   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 19:21:41.631741   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 19:21:41.667496   14108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:21:41.702083   14108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:21:41.736581   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:41.945761   14108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 19:21:41.982603   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:21:41.995896   14108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 19:21:42.037880   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:21:42.071132   14108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:21:42.120351   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:21:42.160312   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:21:42.198546   14108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 19:21:42.265144   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:21:42.289256   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:21:42.341563   14108 ssh_runner.go:195] Run: which cri-dockerd
	I0429 19:21:42.361428   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 19:21:42.380469   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 19:21:42.427129   14108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 19:21:42.640971   14108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 19:21:42.840349   14108 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 19:21:42.840671   14108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 19:21:42.890853   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:43.111882   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 19:21:45.708116   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5962161s)
	I0429 19:21:45.721198   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 19:21:45.762639   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:21:45.805458   14108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 19:21:46.023417   14108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 19:21:46.254513   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:46.474984   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 19:21:46.521315   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:21:46.563144   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:46.784080   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 19:21:46.908286   14108 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 19:21:46.921654   14108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 19:21:46.932302   14108 start.go:562] Will wait 60s for crictl version
	I0429 19:21:46.943667   14108 ssh_runner.go:195] Run: which crictl
	I0429 19:21:46.965242   14108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:21:47.031807   14108 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 19:21:47.042045   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:21:47.092589   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:21:47.169869   14108 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 19:21:47.170444   14108 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 19:21:47.174458   14108 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 19:21:47.175021   14108 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 19:21:47.175021   14108 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 19:21:47.175021   14108 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 19:21:47.177913   14108 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 19:21:47.177913   14108 ip.go:210] interface addr: 172.17.240.1/20
	I0429 19:21:47.193249   14108 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 19:21:47.201249   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:21:47.242307   14108 kubeadm.go:877] updating cluster {Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:21:47.242450   14108 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:21:47.255808   14108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 19:21:47.281163   14108 docker.go:685] Got preloaded images: 
	I0429 19:21:47.281256   14108 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 19:21:47.297948   14108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 19:21:47.330535   14108 ssh_runner.go:195] Run: which lz4
	I0429 19:21:47.337342   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 19:21:47.349898   14108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 19:21:47.358360   14108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 19:21:47.358616   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 19:21:49.119492   14108 docker.go:649] duration metric: took 1.7821379s to copy over tarball
	I0429 19:21:49.134247   14108 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 19:21:58.058678   14108 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9243688s)
	I0429 19:21:58.058678   14108 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 19:21:58.130090   14108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 19:21:58.153440   14108 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 19:21:58.199280   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:21:58.426469   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 19:22:01.858330   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.431837s)
	I0429 19:22:01.871686   14108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 19:22:01.897166   14108 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 19:22:01.897166   14108 cache_images.go:84] Images are preloaded, skipping loading
	I0429 19:22:01.897166   14108 kubeadm.go:928] updating node { 172.17.240.42 8443 v1.30.0 docker true true} ...
	I0429 19:22:01.897166   14108 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-513500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.240.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:22:01.908276   14108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 19:22:01.948140   14108 cni.go:84] Creating CNI manager for ""
	I0429 19:22:01.948234   14108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 19:22:01.948234   14108 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:22:01.948289   14108 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.240.42 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-513500 NodeName:ha-513500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.240.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.240.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:22:01.948359   14108 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.240.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-513500"
	  kubeletExtraArgs:
	    node-ip: 172.17.240.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.240.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 19:22:01.948359   14108 kube-vip.go:115] generating kube-vip config ...
	I0429 19:22:01.962452   14108 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:22:01.995097   14108 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:22:01.995332   14108 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0429 19:22:02.012243   14108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:22:02.032079   14108 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:22:02.047246   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 19:22:02.069559   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0429 19:22:02.105306   14108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:22:02.143052   14108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0429 19:22:02.177699   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0429 19:22:02.224654   14108 ssh_runner.go:195] Run: grep 172.17.255.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:22:02.231752   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:22:02.269800   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:22:02.478831   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:22:02.515240   14108 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500 for IP: 172.17.240.42
	I0429 19:22:02.515240   14108 certs.go:194] generating shared ca certs ...
	I0429 19:22:02.515240   14108 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.516300   14108 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 19:22:02.516649   14108 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 19:22:02.516916   14108 certs.go:256] generating profile certs ...
	I0429 19:22:02.517563   14108 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key
	I0429 19:22:02.517752   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.crt with IP's: []
	I0429 19:22:02.651407   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.crt ...
	I0429 19:22:02.652426   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.crt: {Name:mk5210789812ded2c429974ce014fe11cc92a699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.653895   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key ...
	I0429 19:22:02.653895   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key: {Name:mk6113744d78fd6e93c7abad85557d1bc9ea4511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.654430   14108 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.6e29dd6d
	I0429 19:22:02.654430   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.6e29dd6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.240.42 172.17.255.254]
	I0429 19:22:02.895592   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.6e29dd6d ...
	I0429 19:22:02.895592   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.6e29dd6d: {Name:mkefbdf7c45d1d40d9809f8e3a48ec166982cc2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.897668   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.6e29dd6d ...
	I0429 19:22:02.897668   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.6e29dd6d: {Name:mk6e50931413457f4c849441f1a52a798c4a39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:02.898759   14108 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.6e29dd6d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt
	I0429 19:22:02.911701   14108 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.6e29dd6d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key
	I0429 19:22:02.912612   14108 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key
	I0429 19:22:02.912612   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt with IP's: []
	I0429 19:22:03.085370   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt ...
	I0429 19:22:03.085370   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt: {Name:mk1836f19c366a42bc69dbe804cf2f6504d32531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:03.085964   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key ...
	I0429 19:22:03.085964   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key: {Name:mkb633fcea49aec8fa95bf997683078363622fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
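The profile-cert steps above (minikubeCA signs per-profile client certs such as `client.crt` and `proxy-client.crt`) are done in Go by minikube's `crypto` package; a rough CLI equivalent with openssl, under illustrative paths, would look like this:

```shell
DIR=$(mktemp -d)

# Self-signed CA, standing in for minikubeCA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$DIR/ca.key" \
  -out "$DIR/ca.crt" -days 1 -subj "/CN=minikubeCA" 2>/dev/null

# Client key + CSR, then sign the CSR with the CA -- the analogue of
# "generating signed profile cert for minikube-user" in the log.
openssl req -newkey rsa:2048 -nodes -keyout "$DIR/client.key" \
  -out "$DIR/client.csr" -subj "/CN=minikube-user" 2>/dev/null
openssl x509 -req -in "$DIR/client.csr" -CA "$DIR/ca.crt" -CAkey "$DIR/ca.key" \
  -CAcreateserial -out "$DIR/client.crt" -days 1 2>/dev/null

openssl verify -CAfile "$DIR/ca.crt" "$DIR/client.crt"
```

The apiserver cert in the log additionally carries IP SANs (10.96.0.1, 127.0.0.1, the node IP, and the VIP), which openssl would take via `-extfile` with a `subjectAltName` line.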
	I0429 19:22:03.087151   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:22:03.088109   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:22:03.088292   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:22:03.088469   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:22:03.088668   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:22:03.088818   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:22:03.088982   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:22:03.098045   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:22:03.099024   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 19:22:03.099224   14108 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 19:22:03.099419   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 19:22:03.099419   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 19:22:03.099419   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 19:22:03.100109   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 19:22:03.100452   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 19:22:03.100452   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 19:22:03.101033   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:22:03.101189   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 19:22:03.102547   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:22:03.167021   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 19:22:03.222763   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:22:03.275180   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:22:03.326748   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 19:22:03.376175   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 19:22:03.421660   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:22:03.464456   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 19:22:03.513935   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 19:22:03.567361   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:22:03.627448   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 19:22:03.683367   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:22:03.731739   14108 ssh_runner.go:195] Run: openssl version
	I0429 19:22:03.755665   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 19:22:03.795635   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 19:22:03.804983   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 19:22:03.819082   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 19:22:03.844317   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:22:03.880455   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:22:03.916478   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:22:03.923628   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:22:03.939033   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:22:03.963507   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:22:03.999626   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 19:22:04.034853   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 19:22:04.041310   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 19:22:04.056279   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 19:22:04.082562   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
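The `openssl x509 -hash` / `ln -fs ... <hash>.0` pairs above implement the OpenSSL subject-hash lookup scheme: tools resolve trust anchors in `/etc/ssl/certs` by a symlink named `<subject-hash>.0`. A self-contained sketch in a scratch directory (the `exampleCA` name is made up for illustration):

```shell
CERTDIR=$(mktemp -d)

# A throwaway self-signed CA to hash.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$CERTDIR/ca.key" \
  -out "$CERTDIR/ca.pem" -days 1 -subj "/CN=exampleCA" 2>/dev/null

# Compute the subject hash and create the <hash>.0 symlink,
# mirroring the "openssl x509 -hash -noout" + "ln -fs" steps in the log.
HASH=$(openssl x509 -hash -noout -in "$CERTDIR/ca.pem")
ln -fs "$CERTDIR/ca.pem" "$CERTDIR/$HASH.0"
```

The trailing `.0` is a collision index; a second CA with the same subject hash would get `.1`, and so on.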
	I0429 19:22:04.116996   14108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:22:04.124404   14108 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:22:04.124404   14108 kubeadm.go:391] StartCluster: {Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:22:04.136181   14108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 19:22:04.173513   14108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 19:22:04.214952   14108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 19:22:04.255841   14108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:22:04.276434   14108 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 19:22:04.276434   14108 kubeadm.go:156] found existing configuration files:
	
	I0429 19:22:04.293023   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 19:22:04.312938   14108 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 19:22:04.326258   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 19:22:04.359216   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 19:22:04.379021   14108 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 19:22:04.391884   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 19:22:04.424241   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 19:22:04.442153   14108 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 19:22:04.457711   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:22:04.490976   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 19:22:04.510981   14108 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 19:22:04.527622   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 19:22:04.546429   14108 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 19:22:05.075277   14108 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 19:22:19.880857   14108 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 19:22:19.880985   14108 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 19:22:19.881209   14108 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 19:22:19.881461   14108 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 19:22:19.881461   14108 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0429 19:22:19.881461   14108 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 19:22:19.886543   14108 out.go:204]   - Generating certificates and keys ...
	I0429 19:22:19.886713   14108 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 19:22:19.886713   14108 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 19:22:19.886713   14108 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 19:22:19.887361   14108 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 19:22:19.887544   14108 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 19:22:19.887544   14108 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 19:22:19.887544   14108 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 19:22:19.887544   14108 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-513500 localhost] and IPs [172.17.240.42 127.0.0.1 ::1]
	I0429 19:22:19.888152   14108 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 19:22:19.888291   14108 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-513500 localhost] and IPs [172.17.240.42 127.0.0.1 ::1]
	I0429 19:22:19.888291   14108 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 19:22:19.888291   14108 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 19:22:19.888291   14108 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 19:22:19.888874   14108 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 19:22:19.888924   14108 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 19:22:19.888924   14108 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 19:22:19.888924   14108 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 19:22:19.888924   14108 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 19:22:19.889486   14108 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 19:22:19.889565   14108 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 19:22:19.889565   14108 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 19:22:19.892055   14108 out.go:204]   - Booting up control plane ...
	I0429 19:22:19.892383   14108 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 19:22:19.892600   14108 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 19:22:19.892800   14108 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 19:22:19.892800   14108 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 19:22:19.892800   14108 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 19:22:19.892800   14108 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 19:22:19.893522   14108 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 19:22:19.893522   14108 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 19:22:19.893860   14108 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002752859s
	I0429 19:22:19.893938   14108 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 19:22:19.893938   14108 kubeadm.go:309] [api-check] The API server is healthy after 8.85459715s
	I0429 19:22:19.893938   14108 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 19:22:19.895147   14108 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 19:22:19.895147   14108 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 19:22:19.895401   14108 kubeadm.go:309] [mark-control-plane] Marking the node ha-513500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 19:22:19.895401   14108 kubeadm.go:309] [bootstrap-token] Using token: ljuqwa.dibj2v5bire23t8b
	I0429 19:22:19.898780   14108 out.go:204]   - Configuring RBAC rules ...
	I0429 19:22:19.899397   14108 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 19:22:19.899397   14108 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 19:22:19.899397   14108 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 19:22:19.899978   14108 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 19:22:19.899978   14108 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 19:22:19.899978   14108 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 19:22:19.900802   14108 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 19:22:19.900802   14108 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 19:22:19.901010   14108 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 19:22:19.901010   14108 kubeadm.go:309] 
	I0429 19:22:19.901236   14108 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 19:22:19.901309   14108 kubeadm.go:309] 
	I0429 19:22:19.901488   14108 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 19:22:19.901542   14108 kubeadm.go:309] 
	I0429 19:22:19.901542   14108 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 19:22:19.901733   14108 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 19:22:19.901733   14108 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 19:22:19.901733   14108 kubeadm.go:309] 
	I0429 19:22:19.902018   14108 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 19:22:19.902018   14108 kubeadm.go:309] 
	I0429 19:22:19.902114   14108 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 19:22:19.902114   14108 kubeadm.go:309] 
	I0429 19:22:19.902114   14108 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 19:22:19.902114   14108 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 19:22:19.902670   14108 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 19:22:19.902670   14108 kubeadm.go:309] 
	I0429 19:22:19.902840   14108 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 19:22:19.902976   14108 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 19:22:19.902976   14108 kubeadm.go:309] 
	I0429 19:22:19.902976   14108 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ljuqwa.dibj2v5bire23t8b \
	I0429 19:22:19.902976   14108 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 19:22:19.903664   14108 kubeadm.go:309] 	--control-plane 
	I0429 19:22:19.903664   14108 kubeadm.go:309] 
	I0429 19:22:19.903832   14108 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 19:22:19.904071   14108 kubeadm.go:309] 
	I0429 19:22:19.904384   14108 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ljuqwa.dibj2v5bire23t8b \
	I0429 19:22:19.904571   14108 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 19:22:19.904571   14108 cni.go:84] Creating CNI manager for ""
	I0429 19:22:19.904571   14108 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 19:22:19.909581   14108 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 19:22:19.927899   14108 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 19:22:19.936766   14108 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 19:22:19.936766   14108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 19:22:19.992085   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 19:22:20.662675   14108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 19:22:20.676671   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-513500 minikube.k8s.io/updated_at=2024_04_29T19_22_20_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-513500 minikube.k8s.io/primary=true
	I0429 19:22:20.676671   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:20.683259   14108 ops.go:34] apiserver oom_adj: -16
	I0429 19:22:20.976706   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:21.480112   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:21.984150   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:22.490723   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:22.978520   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:23.478414   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:23.981472   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:24.483278   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:24.985734   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:25.486068   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:25.988619   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:26.477862   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:26.984274   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:27.488001   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:27.988895   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:28.476719   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:28.983088   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:29.490792   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:29.984863   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:30.482257   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:30.983218   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:31.490342   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:31.980088   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:32.489634   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 19:22:32.674523   14108 kubeadm.go:1107] duration metric: took 12.0117638s to wait for elevateKubeSystemPrivileges
	W0429 19:22:32.674711   14108 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 19:22:32.674711   14108 kubeadm.go:393] duration metric: took 28.5501076s to StartCluster
	I0429 19:22:32.674711   14108 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:32.674924   14108 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:22:32.676426   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:22:32.678543   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 19:22:32.678744   14108 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:22:32.678869   14108 start.go:240] waiting for startup goroutines ...
	I0429 19:22:32.678988   14108 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 19:22:32.678988   14108 addons.go:69] Setting storage-provisioner=true in profile "ha-513500"
	I0429 19:22:32.678988   14108 addons.go:234] Setting addon storage-provisioner=true in "ha-513500"
	I0429 19:22:32.678988   14108 addons.go:69] Setting default-storageclass=true in profile "ha-513500"
	I0429 19:22:32.678988   14108 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-513500"
	I0429 19:22:32.678988   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:22:32.678988   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:22:32.679890   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:32.679890   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:32.878317   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 19:22:33.335459   14108 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 19:22:35.018264   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:35.018264   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:35.021042   14108 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:22:35.023485   14108 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:22:35.023485   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 19:22:35.023485   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:35.038702   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:35.038702   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:35.039729   14108 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:22:35.039729   14108 kapi.go:59] client config for ha-513500: &rest.Config{Host:"https://172.17.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 19:22:35.043721   14108 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 19:22:35.044711   14108 addons.go:234] Setting addon default-storageclass=true in "ha-513500"
	I0429 19:22:35.046690   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:22:35.047690   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:37.327113   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:37.327113   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:37.327113   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:22:37.337553   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:37.337602   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:37.337669   14108 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 19:22:37.337669   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 19:22:37.337669   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:22:39.586787   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:22:39.587710   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:39.587710   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:22:40.058636   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:22:40.058681   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:40.058736   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:22:40.235973   14108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:22:42.286299   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:22:42.286587   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:42.287007   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:22:42.447645   14108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 19:22:42.634785   14108 round_trippers.go:463] GET https://172.17.255.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 19:22:42.634905   14108 round_trippers.go:469] Request Headers:
	I0429 19:22:42.634905   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:22:42.634905   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:22:42.649420   14108 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 19:22:42.650493   14108 round_trippers.go:463] PUT https://172.17.255.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 19:22:42.650493   14108 round_trippers.go:469] Request Headers:
	I0429 19:22:42.650493   14108 round_trippers.go:473]     Content-Type: application/json
	I0429 19:22:42.650493   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:22:42.650493   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:22:42.654996   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:22:42.661067   14108 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 19:22:42.665184   14108 addons.go:505] duration metric: took 9.9861584s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 19:22:42.665184   14108 start.go:245] waiting for cluster config update ...
	I0429 19:22:42.665184   14108 start.go:254] writing updated cluster config ...
	I0429 19:22:42.670950   14108 out.go:177] 
	I0429 19:22:42.682225   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:22:42.682225   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:22:42.687758   14108 out.go:177] * Starting "ha-513500-m02" control-plane node in "ha-513500" cluster
	I0429 19:22:42.693746   14108 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:22:42.693746   14108 cache.go:56] Caching tarball of preloaded images
	I0429 19:22:42.694820   14108 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 19:22:42.694820   14108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 19:22:42.695104   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:22:42.696728   14108 start.go:360] acquireMachinesLock for ha-513500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:22:42.696728   14108 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-513500-m02"
	I0429 19:22:42.697724   14108 start.go:93] Provisioning new machine with config: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:22:42.697724   14108 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 19:22:42.702727   14108 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:22:42.703790   14108 start.go:159] libmachine.API.Create for "ha-513500" (driver="hyperv")
	I0429 19:22:42.703790   14108 client.go:168] LocalClient.Create starting
	I0429 19:22:42.703969   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 19:22:42.703969   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:22:42.704478   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:22:42.704478   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 19:22:42.704478   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:22:42.704849   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:22:42.704849   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 19:22:44.694752   14108 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 19:22:44.695727   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:44.696027   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 19:22:46.515411   14108 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 19:22:46.515411   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:46.515936   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:22:48.067421   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:22:48.067421   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:48.067421   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:22:51.776981   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:22:51.777187   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:51.779781   14108 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:22:52.351441   14108 main.go:141] libmachine: Creating SSH key...
	I0429 19:22:53.016072   14108 main.go:141] libmachine: Creating VM...
	I0429 19:22:53.016072   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:22:55.977376   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:22:55.977451   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:55.977508   14108 main.go:141] libmachine: Using switch "Default Switch"
	I0429 19:22:55.977508   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:22:57.860081   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:22:57.860741   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:22:57.860806   14108 main.go:141] libmachine: Creating VHD
	I0429 19:22:57.860881   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 19:23:01.682173   14108 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 90435D32-26F0-487A-9FE2-FF887D35579A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 19:23:01.682173   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:01.683147   14108 main.go:141] libmachine: Writing magic tar header
	I0429 19:23:01.683147   14108 main.go:141] libmachine: Writing SSH key tar header
	I0429 19:23:01.694392   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 19:23:04.924738   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:04.925558   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:04.925558   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\disk.vhd' -SizeBytes 20000MB
	I0429 19:23:07.474439   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:07.475027   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:07.475191   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-513500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 19:23:11.201271   14108 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-513500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 19:23:11.201271   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:11.202071   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-513500-m02 -DynamicMemoryEnabled $false
	I0429 19:23:13.442056   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:13.442056   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:13.442727   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-513500-m02 -Count 2
	I0429 19:23:15.631699   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:15.631864   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:15.632028   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-513500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\boot2docker.iso'
	I0429 19:23:18.227622   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:18.228662   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:18.228662   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-513500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\disk.vhd'
	I0429 19:23:20.933469   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:20.934234   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:20.934234   14108 main.go:141] libmachine: Starting VM...
	I0429 19:23:20.934398   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-513500-m02
	I0429 19:23:24.134056   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:24.134056   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:24.134056   14108 main.go:141] libmachine: Waiting for host to start...
	I0429 19:23:24.134693   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:26.403244   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:26.403244   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:26.404156   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:29.026385   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:29.026385   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:30.038616   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:32.301643   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:32.301643   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:32.301746   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:34.934496   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:34.935483   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:35.948540   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:38.130552   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:38.130552   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:38.130656   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:40.689798   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:40.690796   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:41.705793   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:43.910174   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:43.910174   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:43.911230   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:46.498788   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:23:46.498788   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:47.505991   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:49.726197   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:49.727107   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:49.727277   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:52.377408   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:23:52.377548   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:52.377657   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:54.533125   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:54.533125   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:54.533452   14108 machine.go:94] provisionDockerMachine start ...
	I0429 19:23:54.533514   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:23:56.724704   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:23:56.724704   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:56.725033   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:23:59.373495   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:23:59.373495   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:23:59.381479   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:23:59.393846   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:23:59.393846   14108 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:23:59.516341   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 19:23:59.516341   14108 buildroot.go:166] provisioning hostname "ha-513500-m02"
	I0429 19:23:59.516713   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:01.717814   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:01.718799   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:01.718876   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:04.352439   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:04.352439   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:04.358740   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:04.359729   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:04.359729   14108 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-513500-m02 && echo "ha-513500-m02" | sudo tee /etc/hostname
	I0429 19:24:04.518886   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-513500-m02
	
	I0429 19:24:04.518886   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:06.695571   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:06.695571   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:06.695774   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:09.302859   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:09.302859   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:09.309047   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:09.309954   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:09.309954   14108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
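The `/etc/hosts` patch above either rewrites an existing `127.0.1.1` entry or appends one. The same logic, exercised against a temporary copy so it runs without root (`HOSTS` stands in for `/etc/hosts`):

```shell
#!/bin/sh
# Temp stand-in for /etc/hosts with a stale 127.0.1.1 entry.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=ha-513500-m02

# Only touch the file if the hostname is not already present.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Rewrite the existing 127.0.1.1 line in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
# → 127.0.1.1 ha-513500-m02
```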
	I0429 19:24:09.449306   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:24:09.449369   14108 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 19:24:09.449432   14108 buildroot.go:174] setting up certificates
	I0429 19:24:09.449432   14108 provision.go:84] configureAuth start
	I0429 19:24:09.449551   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:11.611204   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:11.611204   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:11.611204   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:14.204124   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:14.204181   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:14.204181   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:16.394053   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:16.394053   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:16.394053   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:18.996611   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:18.996611   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:18.996611   14108 provision.go:143] copyHostCerts
	I0429 19:24:18.996611   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 19:24:18.997148   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 19:24:18.997284   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 19:24:18.997636   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 19:24:18.999199   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 19:24:18.999482   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 19:24:18.999579   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 19:24:19.000059   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 19:24:19.001109   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 19:24:19.001440   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 19:24:19.001440   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 19:24:19.001833   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 19:24:19.002946   14108 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-513500-m02 san=[127.0.0.1 172.17.247.146 ha-513500-m02 localhost minikube]
	I0429 19:24:19.569474   14108 provision.go:177] copyRemoteCerts
	I0429 19:24:19.583290   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:24:19.584045   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:21.725443   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:21.725443   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:21.726425   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:24.321726   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:24.321887   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:24.322415   14108 sshutil.go:53] new ssh client: &{IP:172.17.247.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\id_rsa Username:docker}
	I0429 19:24:24.433052   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.849725s)
	I0429 19:24:24.433244   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 19:24:24.433334   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 19:24:24.487049   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 19:24:24.487587   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 19:24:24.544495   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 19:24:24.545574   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:24:24.595058   14108 provision.go:87] duration metric: took 15.1455123s to configureAuth
	I0429 19:24:24.595150   14108 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:24:24.595875   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:24:24.595976   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:26.753046   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:26.753812   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:26.753886   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:29.439516   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:29.439516   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:29.446167   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:29.446469   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:29.446469   14108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 19:24:29.579155   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 19:24:29.579258   14108 buildroot.go:70] root file system type: tmpfs
	I0429 19:24:29.579341   14108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
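The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns the guest's root filesystem type (`tmpfs` on the buildroot image). The same probe works on any Linux host; the value reported will differ:

```shell
#!/bin/sh
# Print only the filesystem type of /; tail skips the "Type" header row.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```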
	I0429 19:24:29.579341   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:31.770583   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:31.770583   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:31.770866   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:34.426910   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:34.426910   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:34.433550   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:34.433780   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:34.433780   14108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.240.42"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 19:24:34.593721   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.240.42
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 19:24:34.594004   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:36.774062   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:36.774154   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:36.774154   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:39.416973   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:39.417065   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:39.425073   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:39.425963   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:39.425963   14108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 19:24:41.683825   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 19:24:41.684360   14108 machine.go:97] duration metric: took 47.1505525s to provisionDockerMachine
	I0429 19:24:41.684403   14108 client.go:171] duration metric: took 1m58.9796893s to LocalClient.Create
	I0429 19:24:41.684403   14108 start.go:167] duration metric: took 1m58.9797324s to libmachine.API.Create "ha-513500"
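The `diff -u old new || { mv ...; systemctl ...; }` command above is an update-only-if-changed install: on this fresh node `diff` fails because `docker.service` does not exist yet, so the new unit is moved into place and docker is enabled (hence the "Created symlink" line). A sketch of the pattern on temp files, with the `systemctl` steps replaced by an echo so it runs unprivileged:

```shell
#!/bin/sh
# cur is deliberately a path that does not exist yet, matching the fresh node.
cur=$(mktemp -u)
new=$(mktemp)
echo "[Unit]" > "$new"

# diff exits non-zero when the files differ (or cur is missing),
# which triggers the install branch.
diff -u "$cur" "$new" >/dev/null 2>&1 || {
  mv "$new" "$cur"
  echo "unit changed: would daemon-reload and restart docker"
}
cat "$cur"
```

Re-running the block against an unchanged `cur` would skip the branch entirely, which is what makes repeated provisioning cheap.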
	I0429 19:24:41.684451   14108 start.go:293] postStartSetup for "ha-513500-m02" (driver="hyperv")
	I0429 19:24:41.684492   14108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:24:41.698260   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:24:41.698260   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:43.817338   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:43.817394   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:43.817394   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:46.424449   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:46.425044   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:46.425565   14108 sshutil.go:53] new ssh client: &{IP:172.17.247.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\id_rsa Username:docker}
	I0429 19:24:46.536416   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8381192s)
	I0429 19:24:46.548421   14108 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:24:46.556456   14108 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:24:46.556584   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 19:24:46.557002   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 19:24:46.558160   14108 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 19:24:46.558160   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 19:24:46.571181   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:24:46.591542   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 19:24:46.641012   14108 start.go:296] duration metric: took 4.9565237s for postStartSetup
	I0429 19:24:46.643791   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:48.749521   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:48.749521   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:48.749521   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:51.437386   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:51.437386   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:51.437588   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:24:51.440618   14108 start.go:128] duration metric: took 2m8.7419384s to createHost
	I0429 19:24:51.440649   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:53.587469   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:53.588062   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:53.588137   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:24:56.212067   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:24:56.213101   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:56.219778   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:24:56.220420   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:24:56.220557   14108 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:24:56.344391   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418696.353753957
	
	I0429 19:24:56.344391   14108 fix.go:216] guest clock: 1714418696.353753957
	I0429 19:24:56.344391   14108 fix.go:229] Guest: 2024-04-29 19:24:56.353753957 +0000 UTC Remote: 2024-04-29 19:24:51.4406499 +0000 UTC m=+341.371390601 (delta=4.913104057s)
	I0429 19:24:56.344566   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:24:58.475678   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:24:58.475678   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:24:58.475678   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:01.110386   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:25:01.110932   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:01.116677   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:25:01.117227   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.247.146 22 <nil> <nil>}
	I0429 19:25:01.117311   14108 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714418696
	I0429 19:25:01.273062   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 19:24:56 UTC 2024
	
	I0429 19:25:01.273118   14108 fix.go:236] clock set: Mon Apr 29 19:24:56 UTC 2024
	 (err=<nil>)
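The clock-sync step above parses the guest's `date +%s.%N`, compares it to the host's timestamp, and only issues `sudo date -s @<epoch>` when the delta is large. The delta arithmetic, reproduced with the two timestamps the log reports (no clock is set here):

```shell
#!/bin/sh
# Values taken from the fix.go lines in the log.
guest=1714418696.353753957   # guest `date +%s.%N`
remote=1714418691.440649900  # host-side timestamp
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.3f", g - r }')
echo "delta=${delta}s"
# → delta=4.913s, matching the log's reported 4.913104057s
```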
	I0429 19:25:01.273118   14108 start.go:83] releasing machines lock for "ha-513500-m02", held for 2m18.5753683s
	I0429 19:25:01.273335   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:25:03.462278   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:03.462402   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:03.462402   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:06.086664   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:25:06.086664   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:06.090069   14108 out.go:177] * Found network options:
	I0429 19:25:06.092703   14108 out.go:177]   - NO_PROXY=172.17.240.42
	W0429 19:25:06.095186   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:25:06.097627   14108 out.go:177]   - NO_PROXY=172.17.240.42
	W0429 19:25:06.100032   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:25:06.101397   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:25:06.103891   14108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:25:06.104054   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:25:06.120702   14108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 19:25:06.120702   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m02 ).state
	I0429 19:25:08.312868   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:08.313731   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:08.313731   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:08.336798   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:08.336798   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:08.337600   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:11.025171   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:25:11.026226   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:11.026296   14108 sshutil.go:53] new ssh client: &{IP:172.17.247.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\id_rsa Username:docker}
	I0429 19:25:11.047745   14108 main.go:141] libmachine: [stdout =====>] : 172.17.247.146
	
	I0429 19:25:11.047745   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:11.048725   14108 sshutil.go:53] new ssh client: &{IP:172.17.247.146 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m02\id_rsa Username:docker}
	I0429 19:25:11.185386   14108 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0646475s)
	I0429 19:25:11.185506   14108 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.081578s)
	W0429 19:25:11.185506   14108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:25:11.198871   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:25:11.230596   14108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
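The `find ... -exec mv {} {}.mk_disabled` above disables conflicting bridge/podman CNI configs by renaming rather than deleting them, so they can be restored later. The same find expression against a temp directory instead of `/etc/cni/net.d`:

```shell
#!/bin/sh
# Temp stand-in for /etc/cni/net.d with one matching and one non-matching file.
cni=$(mktemp -d)
touch "$cni/87-podman-bridge.conflist" "$cni/200-loopback.conf"

# Rename bridge/podman configs not already disabled.
find "$cni" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$cni"
# → 200-loopback.conf  87-podman-bridge.conflist.mk_disabled
```

The `-not -name '*.mk_disabled'` clause makes the rename idempotent across reruns.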
	I0429 19:25:11.231462   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:25:11.231620   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
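The crictl endpoint is pinned by piping a one-line YAML through `tee` into `/etc/crictl.yaml` (the `%!s(MISSING)` is a harness formatting artifact, not part of the command). Shown here against a temp directory so no sudo is needed:

```shell
#!/bin/sh
# Temp stand-in for /etc.
etc=$(mktemp -d)
printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' \
  | tee "$etc/crictl.yaml" >/dev/null
cat "$etc/crictl.yaml"
# → runtime-endpoint: unix:///run/containerd/containerd.sock
```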
	I0429 19:25:11.283701   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 19:25:11.319389   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 19:25:11.342130   14108 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 19:25:11.355659   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 19:25:11.393063   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:25:11.432731   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 19:25:11.468686   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:25:11.503040   14108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:25:11.539336   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 19:25:11.574787   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 19:25:11.610038   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 19:25:11.643730   14108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:25:11.680657   14108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:25:11.714656   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:11.935177   14108 ssh_runner.go:195] Run: sudo systemctl restart containerd
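	The containerd reconfiguration above is a chain of in-place `sed` edits followed by a restart. A minimal sketch of the same rewrites against a throwaway copy of config.toml (the file content here is illustrative, not the real minikube template):

```shell
#!/bin/bash
# Sketch: force the cgroupfs driver and pin the pause image, as the
# logged sed commands do, but on a scratch file instead of
# /etc/containerd/config.toml. Assumes GNU sed (-i, -r).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  sandbox_image = "registry.k8s.io/pause:3.8"
  SystemdCgroup = true
EOF

sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"

cat "$cfg"
```

	The indentation-preserving capture group `( *)` is what lets the same expression work at any nesting depth in the TOML file.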
	I0429 19:25:11.970968   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:25:11.984646   14108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 19:25:12.023433   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:25:12.064882   14108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:25:12.112596   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:25:12.153528   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:25:12.195975   14108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 19:25:12.267622   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:25:12.295194   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:25:12.348928   14108 ssh_runner.go:195] Run: which cri-dockerd
	I0429 19:25:12.371080   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 19:25:12.393539   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 19:25:12.443495   14108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 19:25:12.671827   14108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 19:25:12.875804   14108 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 19:25:12.875869   14108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 19:25:12.926479   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:13.142509   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 19:25:15.731373   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.588724s)
	I0429 19:25:15.745518   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 19:25:15.787901   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:25:15.825831   14108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 19:25:16.046597   14108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 19:25:16.283031   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:16.508407   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 19:25:16.554913   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:25:16.593957   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:16.825578   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 19:25:16.960164   14108 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 19:25:16.973239   14108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 19:25:16.983680   14108 start.go:562] Will wait 60s for crictl version
	I0429 19:25:16.996395   14108 ssh_runner.go:195] Run: which crictl
	I0429 19:25:17.017470   14108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:25:17.074114   14108 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 19:25:17.083121   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:25:17.135123   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:25:17.172515   14108 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 19:25:17.177358   14108 out.go:177]   - env NO_PROXY=172.17.240.42
	I0429 19:25:17.181475   14108 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 19:25:17.186699   14108 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 19:25:17.186824   14108 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 19:25:17.186824   14108 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 19:25:17.186824   14108 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 19:25:17.191425   14108 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 19:25:17.191487   14108 ip.go:210] interface addr: 172.17.240.1/20
	I0429 19:25:17.211794   14108 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 19:25:17.219017   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
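	The /etc/hosts update above uses a filter-then-append idiom: drop any stale host.minikube.internal line, append the fresh mapping, and copy the rewritten file back in one step. A sketch of the same idiom on a temp file (addresses are just the ones seen in the log):

```shell
#!/bin/bash
# Rewrite a hosts file so exactly one host.minikube.internal entry
# survives, regardless of whether a stale one was present.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.17.0.9\thost.minikube.internal\n' > "$hosts"

tmp=$(mktemp)
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.17.240.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"

grep -c 'host.minikube.internal' "$hosts"   # prints 1: old entry replaced
```

	Writing to a temp file and then `cp`-ing it back (rather than redirecting into the file being read) avoids truncating /etc/hosts mid-read.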
	I0429 19:25:17.246348   14108 mustload.go:65] Loading cluster: ha-513500
	I0429 19:25:17.246855   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:25:17.248076   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:25:19.378111   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:19.378111   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:19.378699   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:25:19.379360   14108 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500 for IP: 172.17.247.146
	I0429 19:25:19.379360   14108 certs.go:194] generating shared ca certs ...
	I0429 19:25:19.379488   14108 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:25:19.380095   14108 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 19:25:19.380470   14108 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 19:25:19.380596   14108 certs.go:256] generating profile certs ...
	I0429 19:25:19.381215   14108 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key
	I0429 19:25:19.381443   14108 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.1f34c545
	I0429 19:25:19.381548   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.1f34c545 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.240.42 172.17.247.146 172.17.255.254]
	I0429 19:25:19.755547   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.1f34c545 ...
	I0429 19:25:19.755547   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.1f34c545: {Name:mk271c88acfc6db25bfab47fbc94e7bcf34e85a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:25:19.757371   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.1f34c545 ...
	I0429 19:25:19.757371   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.1f34c545: {Name:mke9a46b3c4416c9e568a9bbc772920966068d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:25:19.757884   14108 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.1f34c545 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt
	I0429 19:25:19.769937   14108 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.1f34c545 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key
	I0429 19:25:19.770967   14108 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key
	I0429 19:25:19.770967   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:25:19.770967   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:25:19.772397   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:25:19.772711   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:25:19.772711   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:25:19.773016   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:25:19.773016   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:25:19.773555   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:25:19.774130   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 19:25:19.774161   14108 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 19:25:19.774161   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 19:25:19.774890   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 19:25:19.775531   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 19:25:19.775531   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 19:25:19.776464   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 19:25:19.776464   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:25:19.776464   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 19:25:19.776998   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 19:25:19.777434   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:25:21.914884   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:21.914884   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:21.915703   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:24.531839   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:25:24.531897   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:24.531897   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:25:24.628583   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0429 19:25:24.636161   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 19:25:24.676521   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0429 19:25:24.684455   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0429 19:25:24.720397   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 19:25:24.728704   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 19:25:24.767379   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0429 19:25:24.774772   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 19:25:24.811910   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0429 19:25:24.820997   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 19:25:24.856129   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0429 19:25:24.863017   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 19:25:24.884075   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:25:24.939258   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 19:25:24.991498   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:25:25.044226   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:25:25.097875   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 19:25:25.150479   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0429 19:25:25.211064   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:25:25.267869   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 19:25:25.322930   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:25:25.378424   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 19:25:25.430950   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 19:25:25.481934   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 19:25:25.517248   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0429 19:25:25.556841   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 19:25:25.592728   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 19:25:25.627995   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 19:25:25.662790   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 19:25:25.697158   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 19:25:25.745194   14108 ssh_runner.go:195] Run: openssl version
	I0429 19:25:25.764761   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:25:25.798355   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:25:25.807673   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:25:25.821435   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:25:25.844480   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:25:25.880565   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 19:25:25.914326   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 19:25:25.921754   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 19:25:25.935357   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 19:25:25.956936   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 19:25:25.994271   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 19:25:26.031400   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 19:25:26.040323   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 19:25:26.054078   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 19:25:26.077741   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:25:26.124699   14108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:25:26.132687   14108 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:25:26.133013   14108 kubeadm.go:928] updating node {m02 172.17.247.146 8443 v1.30.0 docker true true} ...
	I0429 19:25:26.133209   14108 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-513500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.247.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:25:26.133209   14108 kube-vip.go:115] generating kube-vip config ...
	I0429 19:25:26.145834   14108 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:25:26.175290   14108 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:25:26.175397   14108 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0429 19:25:26.189293   14108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:25:26.218094   14108 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 19:25:26.231878   14108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 19:25:26.255583   14108 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0429 19:25:26.256086   14108 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0429 19:25:26.256173   14108 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0429 19:25:27.404293   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:25:27.419561   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:25:27.428010   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 19:25:27.428097   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 19:25:28.762151   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:25:28.781361   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:25:28.789528   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 19:25:28.789758   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 19:25:30.770435   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:25:30.798246   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:25:30.811970   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:25:30.821250   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 19:25:30.821537   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
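	Each binary transfer above is gated by a stat probe: the file is only copied when `stat -c "%s %y"` fails, i.e. the destination is missing. A sketch of that check-then-copy pattern with local temp files (paths are hypothetical; assumes GNU stat):

```shell
#!/bin/bash
# Copy src to dst only when the stat existence probe on dst fails,
# mirroring the checks before each kubectl/kubeadm/kubelet upload.
src=$(mktemp)
echo 'binary-bytes' > "$src"
dst="$(mktemp -d)/kubelet"

if ! stat -c '%s %y' "$dst" >/dev/null 2>&1; then
  cp "$src" "$dst"
fi

stat -c '%s' "$dst"   # succeeds now: dst exists after the copy
```

	Probing size and mtime (`%s %y`) rather than just existence is what would let a fuller implementation also re-copy when the cached binary changes.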
	I0429 19:25:31.630846   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 19:25:31.654971   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0429 19:25:31.692128   14108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:25:31.734693   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:25:31.787923   14108 ssh_runner.go:195] Run: grep 172.17.255.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:25:31.794797   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:25:31.839541   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:25:32.071289   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:25:32.102570   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:25:32.103394   14108 start.go:316] joinCluster: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:25:32.103510   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 19:25:32.103510   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:25:34.232794   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:25:34.232794   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:34.233820   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:25:36.914959   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:25:36.915836   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:25:36.916315   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:25:37.137071   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0330421s)
	I0429 19:25:37.137242   14108 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:25:37.137341   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0yzlj.jap8zibl02aqm219 --discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-513500-m02 --control-plane --apiserver-advertise-address=172.17.247.146 --apiserver-bind-port=8443"
	I0429 19:26:23.608723   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0yzlj.jap8zibl02aqm219 --discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-513500-m02 --control-plane --apiserver-advertise-address=172.17.247.146 --apiserver-bind-port=8443": (46.4710201s)
	I0429 19:26:23.608723   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 19:26:24.504734   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-513500-m02 minikube.k8s.io/updated_at=2024_04_29T19_26_24_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-513500 minikube.k8s.io/primary=false
	I0429 19:26:24.698086   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-513500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 19:26:24.897813   14108 start.go:318] duration metric: took 52.7940082s to joinCluster
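The join sequence logged above (a `kubeadm token create --print-join-command` on the primary, then a `kubeadm join ... --control-plane` on m02, followed by enabling the kubelet and labeling/untainting the node) assembles one long command line. A minimal sketch of how such an argv can be built; every concrete value below (token, CA hash, IPs) is a hypothetical placeholder, not a value from this run:

```python
# Sketch of the control-plane join command assembled in the log above.
# Token, CA-cert hash, and addresses are hypothetical placeholders.

def build_join_cmd(endpoint, token, ca_cert_hash, node_name, advertise_ip,
                   bind_port=8443, cri_socket="unix:///var/run/cri-dockerd.sock"):
    """Return the kubeadm argv for joining an extra control-plane node."""
    return [
        "kubeadm", "join", endpoint,
        "--token", token,
        "--discovery-token-ca-cert-hash", ca_cert_hash,
        "--ignore-preflight-errors=all",
        "--cri-socket", cri_socket,
        f"--node-name={node_name}",
        "--control-plane",
        f"--apiserver-advertise-address={advertise_ip}",
        f"--apiserver-bind-port={bind_port}",
    ]

cmd = build_join_cmd(
    "control-plane.minikube.internal:8443",
    "abcdef.0123456789abcdef",   # hypothetical bootstrap token
    "sha256:deadbeef",           # hypothetical discovery CA-cert hash
    "ha-513500-m02",
    "172.17.247.146",
)
print(" ".join(cmd))
```

The `--control-plane` and `--apiserver-advertise-address` flags are what distinguish this from a plain worker join.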
	I0429 19:26:24.897813   14108 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:26:24.903363   14108 out.go:177] * Verifying Kubernetes components...
	I0429 19:26:24.899034   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:26:24.920973   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:26:25.400157   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:26:25.431717   14108 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:26:25.432198   14108 kapi.go:59] client config for ha-513500: &rest.Config{Host:"https://172.17.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 19:26:25.432198   14108 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.255.254:8443 with https://172.17.240.42:8443
	I0429 19:26:25.433570   14108 node_ready.go:35] waiting up to 6m0s for node "ha-513500-m02" to be "Ready" ...
	I0429 19:26:25.433570   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:25.433570   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:25.433570   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:25.433570   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:25.450402   14108 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 19:26:25.938808   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:25.938890   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:25.938890   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:25.938890   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:25.946484   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:26.445084   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:26.445148   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:26.445148   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:26.445224   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:26.453753   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:26:26.935966   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:26.936079   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:26.936079   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:26.936135   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:26.944118   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:27.441818   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:27.441818   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:27.442180   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:27.442180   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:27.804807   14108 round_trippers.go:574] Response Status: 200 OK in 362 milliseconds
	I0429 19:26:27.805323   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:27.947326   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:27.947326   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:27.947326   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:27.947326   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:27.952626   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:28.438670   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:28.438670   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:28.438670   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:28.438670   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:28.444472   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:28.945844   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:28.945844   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:28.945844   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:28.945969   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:28.974028   14108 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0429 19:26:29.440738   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:29.441035   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:29.441035   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:29.441035   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:29.446433   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:29.934012   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:29.934012   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:29.934012   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:29.934012   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:29.940479   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:29.941450   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:30.445672   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:30.468037   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:30.468037   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:30.468037   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:30.473265   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:30.935997   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:30.936108   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:30.936108   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:30.936108   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:30.940845   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:31.441278   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:31.441365   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:31.441365   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:31.441365   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:31.446106   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:31.947541   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:31.947541   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:31.947541   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:31.947541   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:31.953177   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:31.954702   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:32.434284   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:32.434630   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:32.434630   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:32.434703   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:32.440350   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:32.948445   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:32.948445   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:32.948445   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:32.948445   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:32.954059   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:33.435971   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:33.435971   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:33.435971   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:33.435971   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:33.440574   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:33.945943   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:33.945943   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:33.945943   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:33.945943   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:33.952625   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:34.434132   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:34.434132   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:34.434132   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:34.434132   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:34.443981   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:26:34.444993   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:34.935559   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:34.935882   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:34.935882   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:34.935882   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:34.941936   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:35.436593   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:35.436593   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:35.436593   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:35.436593   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:35.442480   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:35.936507   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:35.936507   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:35.936683   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:35.936683   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:35.941247   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:36.439202   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:36.439202   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:36.439202   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:36.439202   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:36.446067   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:36.446977   14108 node_ready.go:53] node "ha-513500-m02" has status "Ready":"False"
	I0429 19:26:36.939470   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:36.939470   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:36.939470   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:36.939470   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:36.944469   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.446898   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:37.446898   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.446898   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.446898   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.452445   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:37.453450   14108 node_ready.go:49] node "ha-513500-m02" has status "Ready":"True"
	I0429 19:26:37.453450   14108 node_ready.go:38] duration metric: took 12.0197862s for node "ha-513500-m02" to be "Ready" ...
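The block above is minikube's readiness poll: it GETs the node object roughly every 500 ms until the `Ready` condition flips to `True`, bounded by a 6-minute deadline (here it took about 12 s). A minimal, self-contained sketch of that poll-with-deadline loop; the probe function is a stand-in for the real `GET /api/v1/nodes/<name>` check, and the timings are illustrative:

```python
import time

def wait_until_ready(probe, timeout=360.0, interval=0.5,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll probe() until it returns True or the deadline passes.

    Returns elapsed seconds on success; raises TimeoutError otherwise.
    probe stands in for the node Ready-condition check in the log above.
    """
    start = clock()
    while True:
        if probe():
            return clock() - start
        if clock() - start >= timeout:
            raise TimeoutError("node never reported Ready")
        sleep(interval)

# Fake probe: reports Ready on the fourth check, like a node warming up.
state = {"calls": 0}
def fake_probe():
    state["calls"] += 1
    return state["calls"] >= 4

elapsed = wait_until_ready(fake_probe, timeout=5.0, interval=0.01)
```

Injecting `clock` and `sleep` keeps the loop testable without real half-second waits.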
	I0429 19:26:37.453450   14108 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:26:37.454056   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:37.454056   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.454056   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.454056   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.461346   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:37.472695   14108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.472695   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5jxcm
	I0429 19:26:37.472695   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.473243   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.473243   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.477024   14108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:26:37.478550   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.478626   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.478626   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.478626   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.483051   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.484036   14108 pod_ready.go:92] pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.484159   14108 pod_ready.go:81] duration metric: took 11.4647ms for pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.484159   14108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.484300   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n22jn
	I0429 19:26:37.484300   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.484300   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.484300   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.489039   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.489825   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.489873   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.489873   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.489873   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.493874   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.495021   14108 pod_ready.go:92] pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.495021   14108 pod_ready.go:81] duration metric: took 10.8612ms for pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.495021   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.495274   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500
	I0429 19:26:37.495274   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.495274   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.495274   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.500250   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.501214   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.501214   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.501214   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.501214   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.506208   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:37.507408   14108 pod_ready.go:92] pod "etcd-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.507408   14108 pod_ready.go:81] duration metric: took 12.3868ms for pod "etcd-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.507408   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.507408   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m02
	I0429 19:26:37.507408   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.507408   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.507408   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.513055   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:37.514257   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:37.514257   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.514828   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.514828   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.518524   14108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:26:37.518524   14108 pod_ready.go:92] pod "etcd-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.518524   14108 pod_ready.go:81] duration metric: took 11.1166ms for pod "etcd-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.518524   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.651990   14108 request.go:629] Waited for 132.4625ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500
	I0429 19:26:37.651990   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500
	I0429 19:26:37.651990   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.651990   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.651990   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.657091   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:37.853774   14108 request.go:629] Waited for 195.5556ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.853906   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:37.853906   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:37.853906   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:37.853906   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:37.863490   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:26:37.864990   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:37.864990   14108 pod_ready.go:81] duration metric: took 346.4633ms for pod "kube-apiserver-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:37.865059   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:38.057285   14108 request.go:629] Waited for 191.959ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:26:38.057678   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:26:38.057678   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.057678   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.057678   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.063079   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:38.261121   14108 request.go:629] Waited for 197.0278ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:38.261304   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:38.261418   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.261418   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.261418   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.268989   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:38.449708   14108 request.go:629] Waited for 77.4875ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:26:38.449897   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:26:38.449897   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.449897   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.449897   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.460288   14108 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 19:26:38.655184   14108 request.go:629] Waited for 193.5814ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:38.655184   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:38.655414   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.655414   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.655414   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.660756   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:38.662036   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:38.662036   14108 pod_ready.go:81] duration metric: took 796.9708ms for pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
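The "Waited for ... due to client-side throttling" lines above come from the Kubernetes client's own rate limiter: when requests arrive faster than the configured QPS allows, the client sleeps until the next token accrues, and that sleep is what gets logged. A toy token-bucket sketch of the behavior, not client-go's actual `rate.Limiter`, with illustrative QPS/burst numbers:

```python
import time

class TokenBucket:
    """Minimal client-side rate limiter: qps tokens/sec, burst capacity."""
    def __init__(self, qps, burst, clock=time.monotonic):
        self.rate = float(qps)
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.clock = clock
        self.last = clock()

    def wait_time(self):
        """Seconds the caller must wait before the next request may go."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return 0.0
        need = 1.0 - self.tokens   # borrow against the next token
        self.tokens = 0.0
        return need / self.rate

# A burst of 3 goes through instantly at 5 QPS; the 4th request must wait
# about 0.2 s -- the kind of delay the throttling log lines above record.
bucket = TokenBucket(qps=5, burst=3)
waits = [bucket.wait_time() for _ in range(4)]
```

This is why the back-to-back pod and node GETs above accumulate ~190 ms waits even though each individual response returns in single-digit milliseconds.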
	I0429 19:26:38.662036   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:38.858545   14108 request.go:629] Waited for 196.3947ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500
	I0429 19:26:38.858788   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500
	I0429 19:26:38.858788   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:38.858788   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:38.858908   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:38.865933   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:26:39.061577   14108 request.go:629] Waited for 194.158ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:39.061852   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:39.061852   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.061852   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.061957   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.066570   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:39.068303   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:39.068303   14108 pod_ready.go:81] duration metric: took 406.2641ms for pod "kube-controller-manager-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.068303   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.249180   14108 request.go:629] Waited for 180.8756ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m02
	I0429 19:26:39.249180   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m02
	I0429 19:26:39.249180   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.249180   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.249180   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.255859   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:39.450776   14108 request.go:629] Waited for 193.3527ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:39.451345   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:39.451345   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.451406   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.451406   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.456069   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:26:39.457572   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:39.457572   14108 pod_ready.go:81] duration metric: took 389.2653ms for pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.457572   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4l6c" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.657274   14108 request.go:629] Waited for 199.5567ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4l6c
	I0429 19:26:39.657517   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4l6c
	I0429 19:26:39.657517   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.657614   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.657614   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.664334   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:39.858096   14108 request.go:629] Waited for 192.9683ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:39.858380   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:39.858380   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:39.858380   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:39.858380   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:39.863981   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:39.865649   14108 pod_ready.go:92] pod "kube-proxy-k4l6c" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:39.865705   14108 pod_ready.go:81] duration metric: took 408.1299ms for pod "kube-proxy-k4l6c" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:39.865705   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tm7tv" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.061213   14108 request.go:629] Waited for 195.3467ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tm7tv
	I0429 19:26:40.061213   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tm7tv
	I0429 19:26:40.061213   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.061213   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.061213   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.067306   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:40.249480   14108 request.go:629] Waited for 180.7548ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:40.249480   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:40.249797   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.249797   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.249797   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.259251   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:26:40.260923   14108 pod_ready.go:92] pod "kube-proxy-tm7tv" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:40.260923   14108 pod_ready.go:81] duration metric: took 395.2154ms for pod "kube-proxy-tm7tv" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.261005   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.452449   14108 request.go:629] Waited for 191.154ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500
	I0429 19:26:40.452640   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500
	I0429 19:26:40.452640   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.452640   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.452640   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.464050   14108 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 19:26:40.658842   14108 request.go:629] Waited for 193.3797ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:40.658842   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:26:40.658842   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.658842   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.658842   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.664927   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:26:40.665543   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:40.665674   14108 pod_ready.go:81] duration metric: took 404.6656ms for pod "kube-scheduler-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.665674   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:40.849847   14108 request.go:629] Waited for 183.9296ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m02
	I0429 19:26:40.850120   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m02
	I0429 19:26:40.850120   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:40.850120   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:40.850179   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:40.855184   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:41.053307   14108 request.go:629] Waited for 195.9752ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:41.053648   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:26:41.053648   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.053712   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.053712   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.059143   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:26:41.060226   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:26:41.060226   14108 pod_ready.go:81] duration metric: took 394.5489ms for pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:26:41.060226   14108 pod_ready.go:38] duration metric: took 3.6067471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:26:41.060376   14108 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:26:41.074029   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:26:41.111099   14108 api_server.go:72] duration metric: took 16.2131597s to wait for apiserver process to appear ...
	I0429 19:26:41.111099   14108 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:26:41.111099   14108 api_server.go:253] Checking apiserver healthz at https://172.17.240.42:8443/healthz ...
	I0429 19:26:41.119115   14108 api_server.go:279] https://172.17.240.42:8443/healthz returned 200:
	ok
	I0429 19:26:41.119197   14108 round_trippers.go:463] GET https://172.17.240.42:8443/version
	I0429 19:26:41.119352   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.119352   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.119352   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.121164   14108 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 19:26:41.121536   14108 api_server.go:141] control plane version: v1.30.0
	I0429 19:26:41.121660   14108 api_server.go:131] duration metric: took 10.5607ms to wait for apiserver health ...
	I0429 19:26:41.121660   14108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:26:41.257729   14108 request.go:629] Waited for 135.9879ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:41.257729   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:41.257729   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.257729   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.257729   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.267602   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:26:41.273611   14108 system_pods.go:59] 17 kube-system pods found
	I0429 19:26:41.273611   14108 system_pods.go:61] "coredns-7db6d8ff4d-5jxcm" [37ba2046-4273-4570-87af-2cc6d03ca54a] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "coredns-7db6d8ff4d-n22jn" [053e60b3-41d0-4923-9655-02d7dacd691f] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "etcd-ha-513500" [63f6504e-f824-4c6d-afb9-92ed2f0457cd] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "etcd-ha-513500-m02" [2d63d157-843e-4750-b4b0-cfa577e7c8a1] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "kindnet-9w6qr" [eb7641e9-6df3-4b9f-b78c-e251de8ebf78] Running
	I0429 19:26:41.273611   14108 system_pods.go:61] "kindnet-kdpql" [da068cd7-8925-45ed-a5a4-ff2db9d08bd8] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-apiserver-ha-513500" [e7a880e7-5218-4bde-9d62-532836751bbe] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-apiserver-ha-513500-m02" [52c1e20c-27a1-47d2-8405-4537727dac35] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-controller-manager-ha-513500" [bcf915a3-542c-422a-815b-823254b624ff] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-controller-manager-ha-513500-m02" [bc495cfd-bf88-4ef8-b33c-d252f4d9a717] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-proxy-k4l6c" [2c1fff7e-2f97-497a-b6b6-0fcb6e2fcea6] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-proxy-tm7tv" [b4ba7f26-253c-4c1c-83f4-7251a2ad14d4] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-scheduler-ha-513500" [76e5a3e9-d895-406a-ad12-cbaa48b4c52d] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-scheduler-ha-513500-m02" [643c27a0-ca4d-499d-abd7-99aa504580cb] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-vip-ha-513500" [bf461c57-113c-4b7b-987e-04dcc8c13373] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "kube-vip-ha-513500-m02" [76f42a60-c769-42fe-ab90-963fe0ec3489] Running
	I0429 19:26:41.274630   14108 system_pods.go:61] "storage-provisioner" [6a5df654-f7da-40f4-a05f-acf47aa779a1] Running
	I0429 19:26:41.274630   14108 system_pods.go:74] duration metric: took 152.8889ms to wait for pod list to return data ...
	I0429 19:26:41.274630   14108 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:26:41.448475   14108 request.go:629] Waited for 173.8436ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:26:41.448475   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:26:41.448475   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.448475   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.448475   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.456499   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:26:41.456499   14108 default_sa.go:45] found service account: "default"
	I0429 19:26:41.456499   14108 default_sa.go:55] duration metric: took 181.8671ms for default service account to be created ...
	I0429 19:26:41.456499   14108 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:26:41.651815   14108 request.go:629] Waited for 195.3146ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:41.652470   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:26:41.652470   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.652519   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.652519   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.660525   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:26:41.667582   14108 system_pods.go:86] 17 kube-system pods found
	I0429 19:26:41.667582   14108 system_pods.go:89] "coredns-7db6d8ff4d-5jxcm" [37ba2046-4273-4570-87af-2cc6d03ca54a] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "coredns-7db6d8ff4d-n22jn" [053e60b3-41d0-4923-9655-02d7dacd691f] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "etcd-ha-513500" [63f6504e-f824-4c6d-afb9-92ed2f0457cd] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "etcd-ha-513500-m02" [2d63d157-843e-4750-b4b0-cfa577e7c8a1] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kindnet-9w6qr" [eb7641e9-6df3-4b9f-b78c-e251de8ebf78] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kindnet-kdpql" [da068cd7-8925-45ed-a5a4-ff2db9d08bd8] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-apiserver-ha-513500" [e7a880e7-5218-4bde-9d62-532836751bbe] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-apiserver-ha-513500-m02" [52c1e20c-27a1-47d2-8405-4537727dac35] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-controller-manager-ha-513500" [bcf915a3-542c-422a-815b-823254b624ff] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-controller-manager-ha-513500-m02" [bc495cfd-bf88-4ef8-b33c-d252f4d9a717] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-proxy-k4l6c" [2c1fff7e-2f97-497a-b6b6-0fcb6e2fcea6] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-proxy-tm7tv" [b4ba7f26-253c-4c1c-83f4-7251a2ad14d4] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-scheduler-ha-513500" [76e5a3e9-d895-406a-ad12-cbaa48b4c52d] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-scheduler-ha-513500-m02" [643c27a0-ca4d-499d-abd7-99aa504580cb] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-vip-ha-513500" [bf461c57-113c-4b7b-987e-04dcc8c13373] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "kube-vip-ha-513500-m02" [76f42a60-c769-42fe-ab90-963fe0ec3489] Running
	I0429 19:26:41.667582   14108 system_pods.go:89] "storage-provisioner" [6a5df654-f7da-40f4-a05f-acf47aa779a1] Running
	I0429 19:26:41.667582   14108 system_pods.go:126] duration metric: took 211.0813ms to wait for k8s-apps to be running ...
	I0429 19:26:41.667582   14108 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:26:41.681069   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:26:41.719796   14108 system_svc.go:56] duration metric: took 52.2138ms WaitForService to wait for kubelet
	I0429 19:26:41.719849   14108 kubeadm.go:576] duration metric: took 16.8219042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:26:41.719896   14108 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:26:41.855108   14108 request.go:629] Waited for 135.1454ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes
	I0429 19:26:41.855297   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes
	I0429 19:26:41.855297   14108 round_trippers.go:469] Request Headers:
	I0429 19:26:41.855297   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:26:41.855297   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:26:41.864007   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:26:41.864580   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:26:41.864580   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:26:41.864580   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:26:41.864580   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:26:41.864580   14108 node_conditions.go:105] duration metric: took 144.6836ms to run NodePressure ...
	I0429 19:26:41.864580   14108 start.go:240] waiting for startup goroutines ...
	I0429 19:26:41.865109   14108 start.go:254] writing updated cluster config ...
	I0429 19:26:41.870020   14108 out.go:177] 
	I0429 19:26:41.883723   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:26:41.883723   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:26:41.890659   14108 out.go:177] * Starting "ha-513500-m03" control-plane node in "ha-513500" cluster
	I0429 19:26:41.893914   14108 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 19:26:41.893914   14108 cache.go:56] Caching tarball of preloaded images
	I0429 19:26:41.894652   14108 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 19:26:41.894652   14108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 19:26:41.894652   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:26:41.899332   14108 start.go:360] acquireMachinesLock for ha-513500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:26:41.899332   14108 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-513500-m03"
	I0429 19:26:41.899332   14108 start.go:93] Provisioning new machine with config: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:26:41.900343   14108 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0429 19:26:41.903345   14108 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:26:41.904542   14108 start.go:159] libmachine.API.Create for "ha-513500" (driver="hyperv")
	I0429 19:26:41.904602   14108 client.go:168] LocalClient.Create starting
	I0429 19:26:41.904884   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 19:26:41.904884   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:26:41.904884   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:26:41.905478   14108 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 19:26:41.905478   14108 main.go:141] libmachine: Decoding PEM data...
	I0429 19:26:41.905478   14108 main.go:141] libmachine: Parsing certificate...
	I0429 19:26:41.905478   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 19:26:43.905198   14108 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 19:26:43.905275   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:43.905275   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 19:26:45.718186   14108 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 19:26:45.718186   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:45.718186   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:26:47.302616   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:26:47.302616   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:47.302616   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:26:51.058527   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:26:51.058527   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:51.061089   14108 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:26:51.579004   14108 main.go:141] libmachine: Creating SSH key...
	I0429 19:26:51.756997   14108 main.go:141] libmachine: Creating VM...
	I0429 19:26:51.756997   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 19:26:54.790733   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 19:26:54.790733   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:54.791618   14108 main.go:141] libmachine: Using switch "Default Switch"
	I0429 19:26:54.791618   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 19:26:56.642998   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 19:26:56.643325   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:26:56.643325   14108 main.go:141] libmachine: Creating VHD
	I0429 19:26:56.643325   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 19:27:00.410847   14108 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 65B5D04D-688D-4E5B-904B-7E141F51FF8F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 19:27:00.410847   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:00.410847   14108 main.go:141] libmachine: Writing magic tar header
	I0429 19:27:00.410847   14108 main.go:141] libmachine: Writing SSH key tar header
	I0429 19:27:00.421665   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 19:27:03.631607   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:03.631607   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:03.631884   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\disk.vhd' -SizeBytes 20000MB
	I0429 19:27:06.201398   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:06.202341   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:06.202457   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-513500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 19:27:09.990901   14108 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-513500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 19:27:09.990998   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:09.990998   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-513500-m03 -DynamicMemoryEnabled $false
	I0429 19:27:12.223991   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:12.224444   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:12.224444   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-513500-m03 -Count 2
	I0429 19:27:14.412192   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:14.412192   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:14.412192   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-513500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\boot2docker.iso'
	I0429 19:27:17.051395   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:17.051395   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:17.051395   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-513500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\disk.vhd'
	I0429 19:27:19.772572   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:19.772572   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:19.772572   14108 main.go:141] libmachine: Starting VM...
	I0429 19:27:19.773468   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-513500-m03
	I0429 19:27:22.989368   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:22.989368   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:22.989368   14108 main.go:141] libmachine: Waiting for host to start...
	I0429 19:27:22.989368   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:25.360543   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:25.360543   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:25.360543   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:27.937259   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:27.937585   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:28.944386   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:31.209567   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:31.209567   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:31.209567   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:33.868379   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:33.868379   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:34.876137   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:37.088703   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:37.088915   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:37.088915   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:39.681856   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:39.682577   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:40.689299   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:42.895504   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:42.895504   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:42.896539   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:45.447085   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 19:27:45.447623   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:46.451433   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:48.687031   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:48.688025   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:48.688077   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:51.375019   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:27:51.375019   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:51.375665   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:53.574792   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:53.574792   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:53.574792   14108 machine.go:94] provisionDockerMachine start ...
	I0429 19:27:53.574792   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:27:55.756313   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:27:55.756385   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:55.756385   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:27:58.393798   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:27:58.393798   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:27:58.404436   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:27:58.416373   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:27:58.416621   14108 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:27:58.553426   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 19:27:58.553426   14108 buildroot.go:166] provisioning hostname "ha-513500-m03"
	I0429 19:27:58.553594   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:00.720275   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:00.720275   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:00.720555   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:03.382865   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:03.382923   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:03.389210   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:03.389959   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:03.389959   14108 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-513500-m03 && echo "ha-513500-m03" | sudo tee /etc/hostname
	I0429 19:28:03.560439   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-513500-m03
	
	I0429 19:28:03.560439   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:05.733256   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:05.733786   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:05.733786   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:08.407301   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:08.407611   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:08.414362   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:08.414504   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:08.414504   14108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:28:08.571230   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:28:08.571230   14108 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 19:28:08.571230   14108 buildroot.go:174] setting up certificates
	I0429 19:28:08.571230   14108 provision.go:84] configureAuth start
	I0429 19:28:08.571230   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:10.741881   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:10.741881   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:10.741881   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:13.368222   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:13.368222   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:13.368789   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:15.515159   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:15.515407   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:15.515407   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:18.146798   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:18.147733   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:18.147808   14108 provision.go:143] copyHostCerts
	I0429 19:28:18.147918   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 19:28:18.148374   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 19:28:18.148374   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 19:28:18.148855   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 19:28:18.150108   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 19:28:18.150389   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 19:28:18.150389   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 19:28:18.150722   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 19:28:18.151800   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 19:28:18.152041   14108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 19:28:18.152041   14108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 19:28:18.152438   14108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 19:28:18.153595   14108 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-513500-m03 san=[127.0.0.1 172.17.246.101 ha-513500-m03 localhost minikube]
	I0429 19:28:18.526406   14108 provision.go:177] copyRemoteCerts
	I0429 19:28:18.539553   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:28:18.539553   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:20.713666   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:20.714647   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:20.714759   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:23.345679   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:23.345679   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:23.346755   14108 sshutil.go:53] new ssh client: &{IP:172.17.246.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\id_rsa Username:docker}
	I0429 19:28:23.458156   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9185641s)
	I0429 19:28:23.458325   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 19:28:23.458791   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 19:28:23.510697   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 19:28:23.510697   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 19:28:23.561556   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 19:28:23.561556   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:28:23.610643   14108 provision.go:87] duration metric: took 15.039294s to configureAuth
	I0429 19:28:23.610643   14108 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:28:23.611636   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:28:23.611636   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:25.734124   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:25.734885   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:25.734885   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:28.356029   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:28.356029   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:28.362740   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:28.363240   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:28.363240   14108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 19:28:28.495663   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 19:28:28.495735   14108 buildroot.go:70] root file system type: tmpfs
	I0429 19:28:28.495948   14108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 19:28:28.495948   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:30.648272   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:30.649140   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:30.649140   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:33.261658   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:33.262017   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:33.270622   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:33.271358   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:33.271358   14108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.240.42"
	Environment="NO_PROXY=172.17.240.42,172.17.247.146"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 19:28:33.441690   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.240.42
	Environment=NO_PROXY=172.17.240.42,172.17.247.146
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 19:28:33.441942   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:35.626105   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:35.626105   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:35.626105   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:38.254611   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:38.255222   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:38.261588   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:38.262134   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:38.262390   14108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 19:28:40.501151   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 19:28:40.501228   14108 machine.go:97] duration metric: took 46.9260654s to provisionDockerMachine
	I0429 19:28:40.501228   14108 client.go:171] duration metric: took 1m58.5956941s to LocalClient.Create
	I0429 19:28:40.501228   14108 start.go:167] duration metric: took 1m58.5959633s to libmachine.API.Create "ha-513500"
	I0429 19:28:40.501228   14108 start.go:293] postStartSetup for "ha-513500-m03" (driver="hyperv")
	I0429 19:28:40.501228   14108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:28:40.515626   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:28:40.515626   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:42.655830   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:42.655830   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:42.655830   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:45.238660   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:45.239622   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:45.240190   14108 sshutil.go:53] new ssh client: &{IP:172.17.246.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\id_rsa Username:docker}
	I0429 19:28:45.350234   14108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8345198s)
	I0429 19:28:45.364856   14108 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:28:45.372269   14108 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:28:45.372269   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 19:28:45.372907   14108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 19:28:45.373879   14108 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 19:28:45.373879   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 19:28:45.387882   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:28:45.409649   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 19:28:45.468482   14108 start.go:296] duration metric: took 4.966989s for postStartSetup
	I0429 19:28:45.472723   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:47.629572   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:47.629572   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:47.629821   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:50.300143   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:50.300143   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:50.300438   14108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\config.json ...
	I0429 19:28:50.302686   14108 start.go:128] duration metric: took 2m8.4013339s to createHost
	I0429 19:28:50.302980   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:52.492330   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:52.492330   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:52.492865   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:28:55.101854   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:28:55.101854   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:55.111485   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:28:55.112230   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:28:55.112230   14108 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 19:28:55.239417   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418935.253259815
	
	I0429 19:28:55.239417   14108 fix.go:216] guest clock: 1714418935.253259815
	I0429 19:28:55.239417   14108 fix.go:229] Guest: 2024-04-29 19:28:55.253259815 +0000 UTC Remote: 2024-04-29 19:28:50.3029808 +0000 UTC m=+580.231871601 (delta=4.950279015s)
	I0429 19:28:55.239574   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:28:57.414202   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:28:57.415076   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:28:57.415139   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:00.053729   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:29:00.053954   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:00.060609   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 19:29:00.060799   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.246.101 22 <nil> <nil>}
	I0429 19:29:00.060799   14108 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714418935
	I0429 19:29:00.216429   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 19:28:55 UTC 2024
	
	I0429 19:29:00.216502   14108 fix.go:236] clock set: Mon Apr 29 19:28:55 UTC 2024
	 (err=<nil>)
	I0429 19:29:00.216502   14108 start.go:83] releasing machines lock for "ha-513500-m03", held for 2m18.316083s
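The clock fix in the lines above reads the guest's epoch time over SSH (`date +%s.%N`), compares it against the host clock, and resets the guest with `sudo date -s @<epoch>` when the drift is too large. A minimal local sketch of that check, with both timestamps stubbed for determinism (the variable names and the 2-second threshold are illustrative, not minikube's actual values):

```shell
# Reproduce the guest-clock drift check from the log (values stubbed;
# in the log, guest_epoch comes from `date +%s.%N` run over SSH).
guest_epoch=1714418935
host_epoch=1714418930   # stand-in for the local clock; normally $(date +%s)

delta=$((host_epoch - guest_epoch))
[ "$delta" -lt 0 ] && delta=$((-delta))   # absolute drift in seconds

threshold=2
if [ "$delta" -gt "$threshold" ]; then
  # In the log this is `sudo date -s @<epoch>` executed on the guest.
  echo "would run: sudo date -s @$guest_epoch"
fi
```

In the log the measured delta was about 4.95s, so the reset branch fired.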
	I0429 19:29:00.216864   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:29:02.387914   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:02.388665   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:02.388665   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:05.025197   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:29:05.025197   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:05.030939   14108 out.go:177] * Found network options:
	I0429 19:29:05.033920   14108 out.go:177]   - NO_PROXY=172.17.240.42,172.17.247.146
	W0429 19:29:05.035981   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:29:05.035981   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:29:05.039114   14108 out.go:177]   - NO_PROXY=172.17.240.42,172.17.247.146
	W0429 19:29:05.043378   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:29:05.043378   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:29:05.044574   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:29:05.044574   14108 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:29:05.047762   14108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:29:05.047762   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:29:05.060966   14108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 19:29:05.061156   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500-m03 ).state
	I0429 19:29:07.267963   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:07.267963   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:07.267963   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:07.271668   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:07.271668   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:07.271668   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:10.035824   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:29:10.035824   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:10.035936   14108 sshutil.go:53] new ssh client: &{IP:172.17.246.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\id_rsa Username:docker}
	I0429 19:29:10.064588   14108 main.go:141] libmachine: [stdout =====>] : 172.17.246.101
	
	I0429 19:29:10.064588   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:10.065386   14108 sshutil.go:53] new ssh client: &{IP:172.17.246.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500-m03\id_rsa Username:docker}
	I0429 19:29:10.141652   14108 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0805372s)
	W0429 19:29:10.141770   14108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:29:10.155424   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:29:10.344285   14108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:29:10.344428   14108 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2966249s)
	I0429 19:29:10.344428   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:29:10.344664   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:29:10.405048   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 19:29:10.444352   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 19:29:10.484000   14108 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 19:29:10.498919   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 19:29:10.543700   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:29:10.583833   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 19:29:10.626125   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 19:29:10.663682   14108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:29:10.701840   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 19:29:10.738655   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 19:29:10.777749   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 19:29:10.815018   14108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:29:10.852594   14108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:29:10.904663   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:11.137529   14108 ssh_runner.go:195] Run: sudo systemctl restart containerd
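The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place before restarting containerd; the key edit forces `SystemdCgroup = false` while preserving indentation via a capture group. The same substitution, applied to a throwaway copy of the config (the file contents here are a minimal illustrative stub, not the real config):

```shell
# Apply the SystemdCgroup substitution from the log to a throwaway file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same sed expression as the log: \1 re-emits the captured leading spaces.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

result=$(grep SystemdCgroup "$cfg")
echo "$result"
rm -f "$cfg"
```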
	I0429 19:29:11.175286   14108 start.go:494] detecting cgroup driver to use...
	I0429 19:29:11.188862   14108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 19:29:11.232328   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:29:11.273413   14108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:29:11.325915   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:29:11.368347   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:29:11.412321   14108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 19:29:11.475213   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 19:29:11.509289   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:29:11.562719   14108 ssh_runner.go:195] Run: which cri-dockerd
	I0429 19:29:11.582998   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 19:29:11.607043   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 19:29:11.654995   14108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 19:29:11.878372   14108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 19:29:12.095804   14108 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 19:29:12.095943   14108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 19:29:12.148665   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:12.369350   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 19:29:14.935311   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5654558s)
	I0429 19:29:14.949001   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 19:29:14.990118   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:29:15.035287   14108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 19:29:15.265160   14108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 19:29:15.522681   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:15.777420   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 19:29:15.831031   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 19:29:15.875382   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:16.108088   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 19:29:16.230861   14108 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 19:29:16.243985   14108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 19:29:16.253737   14108 start.go:562] Will wait 60s for crictl version
	I0429 19:29:16.271546   14108 ssh_runner.go:195] Run: which crictl
	I0429 19:29:16.297067   14108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:29:16.358824   14108 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 19:29:16.370221   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:29:16.417463   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 19:29:16.454510   14108 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 19:29:16.457513   14108 out.go:177]   - env NO_PROXY=172.17.240.42
	I0429 19:29:16.460521   14108 out.go:177]   - env NO_PROXY=172.17.240.42,172.17.247.146
	I0429 19:29:16.462522   14108 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 19:29:16.466524   14108 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 19:29:16.466524   14108 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 19:29:16.466524   14108 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 19:29:16.466524   14108 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 19:29:16.469508   14108 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 19:29:16.469508   14108 ip.go:210] interface addr: 172.17.240.1/20
	I0429 19:29:16.481511   14108 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 19:29:16.488516   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
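The `/etc/hosts` update above uses a filter-append-copy idiom: drop any stale `host.minikube.internal` line, append the fresh mapping, and copy the temp file back over the original in a single `cp`, so the file is never half-written. A sketch of the same idiom on a temp file standing in for `/etc/hosts` (addresses taken from the log; no `sudo` needed here):

```shell
# Rewrite a hosts-style file: strip the old host.minikube.internal entry,
# append the new one, then copy the result back over the original.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n172.17.240.9\thost.minikube.internal\n' > "$hosts"

tmp=$(mktemp)
{ grep -v "${tab}"'host\.minikube\.internal$' "$hosts"; printf '172.17.240.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"   # in the log: sudo cp /tmp/h.$$ /etc/hosts

count=$(grep -c 'host.minikube.internal' "$hosts")
entry=$(grep 'host.minikube.internal' "$hosts")
rm -f "$hosts" "$tmp"
```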
	I0429 19:29:16.510549   14108 mustload.go:65] Loading cluster: ha-513500
	I0429 19:29:16.512250   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:29:16.512810   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:29:18.676610   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:18.676894   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:18.677008   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:29:18.677862   14108 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500 for IP: 172.17.246.101
	I0429 19:29:18.677862   14108 certs.go:194] generating shared ca certs ...
	I0429 19:29:18.677862   14108 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:29:18.678402   14108 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 19:29:18.678804   14108 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 19:29:18.678860   14108 certs.go:256] generating profile certs ...
	I0429 19:29:18.679506   14108 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\client.key
	I0429 19:29:18.679677   14108 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.3dac6f02
	I0429 19:29:18.679677   14108 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.3dac6f02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.240.42 172.17.247.146 172.17.246.101 172.17.255.254]
	I0429 19:29:19.188832   14108 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.3dac6f02 ...
	I0429 19:29:19.188832   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.3dac6f02: {Name:mka4aa4e7b09d84005f0f01ff2299a91be08baaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:29:19.190990   14108 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.3dac6f02 ...
	I0429 19:29:19.190990   14108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.3dac6f02: {Name:mk1974955d4f3ba88d7af5fedd95e2cb2387b0f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:29:19.190990   14108 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt.3dac6f02 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt
	I0429 19:29:19.203706   14108 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key.3dac6f02 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key
	I0429 19:29:19.204038   14108 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key
	I0429 19:29:19.204038   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:29:19.205109   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:29:19.205160   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:29:19.205456   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:29:19.205624   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:29:19.205804   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:29:19.206020   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:29:19.206509   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:29:19.207117   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 19:29:19.207468   14108 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 19:29:19.207726   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 19:29:19.208007   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 19:29:19.208279   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 19:29:19.208474   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 19:29:19.209007   14108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 19:29:19.209471   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:29:19.209640   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 19:29:19.210065   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 19:29:19.210270   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:29:21.402532   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:21.402775   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:21.402869   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:24.010577   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:29:24.010577   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:24.010577   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:29:24.114529   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0429 19:29:24.126107   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 19:29:24.165966   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0429 19:29:24.174800   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0429 19:29:24.211164   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 19:29:24.219415   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 19:29:24.257944   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0429 19:29:24.265603   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 19:29:24.298966   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0429 19:29:24.307110   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 19:29:24.340456   14108 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0429 19:29:24.349190   14108 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 19:29:24.374920   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:29:24.430360   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 19:29:24.481626   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:29:24.534280   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:29:24.588062   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0429 19:29:24.643850   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 19:29:24.693852   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:29:24.742831   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-513500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 19:29:24.791835   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:29:24.840237   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 19:29:24.889792   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 19:29:24.944499   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 19:29:24.980398   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0429 19:29:25.015400   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 19:29:25.049654   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 19:29:25.079788   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 19:29:25.129138   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 19:29:25.170628   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 19:29:25.221738   14108 ssh_runner.go:195] Run: openssl version
	I0429 19:29:25.243675   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 19:29:25.280878   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 19:29:25.288719   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 19:29:25.301984   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 19:29:25.325715   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:29:25.361498   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:29:25.399777   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:29:25.410381   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:29:25.423889   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:29:25.450461   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:29:25.486853   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 19:29:25.523544   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 19:29:25.531455   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 19:29:25.545658   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 19:29:25.569870   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
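Each CA certificate above is linked into `/etc/ssl/certs` under its OpenSSL subject-hash name, guarded by `test -L || ln -fs` so an already-present link is left untouched. A sketch of that guard with throwaway paths (the hash `b5213941` is the one the log reports for minikubeCA; in practice it comes from `openssl x509 -hash -noout -in <cert>`):

```shell
# Recreate the guarded symlink step: link the CA cert under its
# OpenSSL subject-hash name only when the link does not already exist.
dir=$(mktemp -d)
touch "$dir/minikubeCA.pem"

hash=b5213941   # illustrative: output of `openssl x509 -hash -noout`
test -L "$dir/$hash.0" || ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"

target=$(readlink "$dir/$hash.0")
rm -rf "$dir"
```

OpenSSL resolves CAs by looking up `<subject-hash>.0` in the certs directory, which is why the link name matters more than the file name.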
	I0429 19:29:25.606592   14108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:29:25.613588   14108 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:29:25.614123   14108 kubeadm.go:928] updating node {m03 172.17.246.101 8443 v1.30.0 docker true true} ...
	I0429 19:29:25.614394   14108 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-513500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.246.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:29:25.614464   14108 kube-vip.go:115] generating kube-vip config ...
	I0429 19:29:25.627663   14108 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:29:25.657517   14108 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:29:25.657602   14108 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0429 19:29:25.671285   14108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:29:25.689428   14108 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 19:29:25.702607   14108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 19:29:25.722166   14108 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 19:29:25.722780   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:29:25.722780   14108 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 19:29:25.722166   14108 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 19:29:25.722904   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:29:25.738055   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:29:25.739049   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:29:25.741358   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:29:25.745571   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 19:29:25.745571   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 19:29:25.748317   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 19:29:25.748317   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 19:29:25.787618   14108 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:29:25.801527   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:29:25.937194   14108 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 19:29:25.937255   14108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0429 19:29:27.082928   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 19:29:27.107187   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0429 19:29:27.147218   14108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:29:27.183592   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:29:27.239763   14108 ssh_runner.go:195] Run: grep 172.17.255.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:29:27.247517   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:29:27.289581   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:29:27.534325   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:29:27.570536   14108 host.go:66] Checking if "ha-513500" exists ...
	I0429 19:29:27.571366   14108 start.go:316] joinCluster: &{Name:ha-513500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-513500 Namespace:default APIServerHAVIP:172.17.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.240.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.247.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.246.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:29:27.571699   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 19:29:27.571755   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-513500 ).state
	I0429 19:29:29.752091   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 19:29:29.752871   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:29.752871   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-513500 ).networkadapters[0]).ipaddresses[0]
	I0429 19:29:32.436091   14108 main.go:141] libmachine: [stdout =====>] : 172.17.240.42
	
	I0429 19:29:32.436091   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 19:29:32.436905   14108 sshutil.go:53] new ssh client: &{IP:172.17.240.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-513500\id_rsa Username:docker}
	I0429 19:29:32.656075   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0843358s)
	I0429 19:29:32.656075   14108 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.246.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:29:32.656075   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5yaa43.ds6bmbjti0klyjf6 --discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-513500-m03 --control-plane --apiserver-advertise-address=172.17.246.101 --apiserver-bind-port=8443"
	I0429 19:30:18.713293   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5yaa43.ds6bmbjti0klyjf6 --discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-513500-m03 --control-plane --apiserver-advertise-address=172.17.246.101 --apiserver-bind-port=8443": (46.0568563s)
	I0429 19:30:18.713293   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 19:30:19.956534   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.2432312s)
	I0429 19:30:19.977503   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-513500-m03 minikube.k8s.io/updated_at=2024_04_29T19_30_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-513500 minikube.k8s.io/primary=false
	I0429 19:30:20.182454   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-513500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 19:30:20.356810   14108 start.go:318] duration metric: took 52.7849751s to joinCluster
	I0429 19:30:20.356990   14108 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.17.246.101 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 19:30:20.357787   14108 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:30:20.360204   14108 out.go:177] * Verifying Kubernetes components...
	I0429 19:30:20.376812   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:30:20.854955   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:30:20.888703   14108 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:30:20.889465   14108 kapi.go:59] client config for ha-513500: &rest.Config{Host:"https://172.17.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-513500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 19:30:20.889637   14108 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.255.254:8443 with https://172.17.240.42:8443
	I0429 19:30:20.890559   14108 node_ready.go:35] waiting up to 6m0s for node "ha-513500-m03" to be "Ready" ...
	I0429 19:30:20.890696   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:20.890780   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:20.890780   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:20.890780   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:20.907725   14108 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 19:30:21.398190   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:21.398190   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:21.398190   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:21.398190   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:21.404190   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:21.905372   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:21.905629   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:21.905629   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:21.905629   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:21.909907   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:22.396254   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:22.396254   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:22.396254   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:22.396492   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:22.401542   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:22.891138   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:22.891138   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:22.891138   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:22.891138   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:22.895653   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:22.895653   14108 node_ready.go:53] node "ha-513500-m03" has status "Ready":"False"
	I0429 19:30:23.398162   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:23.398281   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:23.398281   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:23.398281   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:23.408428   14108 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 19:30:23.905530   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:23.905767   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:23.905767   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:23.905767   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:23.933106   14108 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0429 19:30:24.395628   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:24.395628   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:24.395628   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:24.395628   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:24.402615   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:24.897788   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:24.897788   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:24.897861   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:24.897861   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:24.902456   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:24.904110   14108 node_ready.go:53] node "ha-513500-m03" has status "Ready":"False"
	I0429 19:30:25.402078   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:25.402449   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:25.402449   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:25.402449   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:25.407728   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:25.906207   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:25.906311   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:25.906311   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:25.906311   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:25.911706   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:26.391822   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:26.392131   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.392131   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.392131   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.397215   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:26.399353   14108 node_ready.go:49] node "ha-513500-m03" has status "Ready":"True"
	I0429 19:30:26.399353   14108 node_ready.go:38] duration metric: took 5.5087511s for node "ha-513500-m03" to be "Ready" ...
	I0429 19:30:26.399445   14108 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:30:26.399542   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:26.399615   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.399615   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.399615   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.413864   14108 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 19:30:26.429152   14108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.429152   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5jxcm
	I0429 19:30:26.429152   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.429152   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.429152   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.433833   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:26.434695   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:26.434695   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.434809   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.434809   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.443755   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:30:26.444917   14108 pod_ready.go:92] pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:26.444917   14108 pod_ready.go:81] duration metric: took 15.7653ms for pod "coredns-7db6d8ff4d-5jxcm" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.445066   14108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.445133   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-n22jn
	I0429 19:30:26.445133   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.445133   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.445133   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.451205   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:26.451205   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:26.452203   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.452322   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.452322   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.457633   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:26.458361   14108 pod_ready.go:92] pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:26.458483   14108 pod_ready.go:81] duration metric: took 13.4167ms for pod "coredns-7db6d8ff4d-n22jn" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.458527   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.458626   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500
	I0429 19:30:26.458626   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.458626   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.458626   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.462868   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:26.463913   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:26.463913   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.463913   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.463913   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.467879   14108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:30:26.469240   14108 pod_ready.go:92] pod "etcd-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:26.469240   14108 pod_ready.go:81] duration metric: took 10.7127ms for pod "etcd-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.469240   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.469240   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m02
	I0429 19:30:26.469240   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.469240   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.469240   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.474875   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:26.475828   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:26.475828   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.475895   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.475895   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.481344   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:26.483364   14108 pod_ready.go:92] pod "etcd-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:26.483364   14108 pod_ready.go:81] duration metric: took 14.124ms for pod "etcd-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.483364   14108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:26.593659   14108 request.go:629] Waited for 109.755ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:26.593748   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:26.593748   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.593816   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.593816   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.600041   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:26.797194   14108 request.go:629] Waited for 196.2696ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:26.797296   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:26.797296   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:26.797296   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:26.797296   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:26.802252   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:27.001253   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:27.001476   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.001476   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.001476   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.010923   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:27.203107   14108 request.go:629] Waited for 191.4334ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.203247   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.203247   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.203247   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.203247   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.208920   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:27.486744   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:27.486833   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.486833   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.486833   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.493117   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:27.596392   14108 request.go:629] Waited for 101.43ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.596621   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.596621   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.596621   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.596717   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.601970   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:27.987946   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:27.987946   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.987946   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.987946   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.993797   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:27.994799   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:27.994799   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:27.994861   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:27.994861   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:27.999816   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:28.489251   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:28.489251   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:28.489346   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:28.489346   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:28.498665   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:28.499581   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:28.499581   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:28.499581   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:28.499649   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:28.504536   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:28.505406   14108 pod_ready.go:102] pod "etcd-ha-513500-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:30:28.991355   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:28.991622   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:28.991622   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:28.991622   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:28.997073   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:28.999298   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:28.999298   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:28.999298   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:28.999298   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:29.004222   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:29.489359   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:29.489793   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:29.489793   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:29.489793   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:29.497456   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:30:29.498751   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:29.498751   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:29.498751   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:29.498751   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:29.503128   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:29.993177   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:29.993421   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:29.993421   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:29.993421   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:29.999450   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:30.000305   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:30.000305   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.000305   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.000305   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.004511   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:30.498031   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:30.498152   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.498152   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.498152   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.505061   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:30.505796   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:30.505796   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.505796   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.505796   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.510247   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:30.512029   14108 pod_ready.go:102] pod "etcd-ha-513500-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:30:30.984865   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:30.985049   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.985049   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.985194   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.989983   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:30.991721   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:30.991721   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:30.991721   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:30.991721   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:30.996048   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:31.498504   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:31.498504   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:31.498504   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:31.498504   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:31.507146   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:30:31.509486   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:31.509571   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:31.509571   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:31.509571   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:31.515094   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:31.989369   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-513500-m03
	I0429 19:30:31.989435   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:31.989435   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:31.989435   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:31.993295   14108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:30:31.994850   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:31.994850   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:31.994850   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:31.994850   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:31.999453   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.000849   14108 pod_ready.go:92] pod "etcd-ha-513500-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.000849   14108 pod_ready.go:81] duration metric: took 5.5174417s for pod "etcd-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.000849   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.000849   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500
	I0429 19:30:32.000849   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.000849   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.000849   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.006129   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.007185   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:32.007185   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.007185   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.007185   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.016442   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:32.016919   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.016919   14108 pod_ready.go:81] duration metric: took 16.0696ms for pod "kube-apiserver-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.016919   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.016919   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m02
	I0429 19:30:32.016919   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.016919   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.016919   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.021065   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.021798   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:32.021798   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.021947   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.021947   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.026842   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.027791   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.027847   14108 pod_ready.go:81] duration metric: took 10.9277ms for pod "kube-apiserver-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.027847   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.193363   14108 request.go:629] Waited for 165.1984ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m03
	I0429 19:30:32.193688   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-513500-m03
	I0429 19:30:32.193688   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.193688   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.193688   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.198330   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.398365   14108 request.go:629] Waited for 198.6615ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:32.398780   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:32.398780   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.398780   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.398878   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.404851   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:32.406204   14108 pod_ready.go:92] pod "kube-apiserver-ha-513500-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.406257   14108 pod_ready.go:81] duration metric: took 378.3546ms for pod "kube-apiserver-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.406257   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.604410   14108 request.go:629] Waited for 197.9082ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500
	I0429 19:30:32.604706   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500
	I0429 19:30:32.604706   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.604706   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.604706   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.610126   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.792486   14108 request.go:629] Waited for 180.2535ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:32.792624   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:32.792624   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.792749   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.792868   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:32.797406   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:32.799059   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:32.799149   14108 pod_ready.go:81] duration metric: took 392.889ms for pod "kube-controller-manager-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.799149   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:32.995109   14108 request.go:629] Waited for 195.9592ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m02
	I0429 19:30:32.995407   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m02
	I0429 19:30:32.995407   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:32.995407   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:32.995407   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.000461   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:33.197733   14108 request.go:629] Waited for 195.8575ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:33.197733   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:33.197733   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.197733   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.197733   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.204640   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:33.208432   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:33.208491   14108 pod_ready.go:81] duration metric: took 409.28ms for pod "kube-controller-manager-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:33.208491   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:33.392264   14108 request.go:629] Waited for 183.5379ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m03
	I0429 19:30:33.392336   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-513500-m03
	I0429 19:30:33.392532   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.392532   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.392532   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.400076   14108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:30:33.604918   14108 request.go:629] Waited for 202.2926ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:33.605191   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:33.605191   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.605191   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.605191   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.612448   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:33.613233   14108 pod_ready.go:92] pod "kube-controller-manager-ha-513500-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:33.613335   14108 pod_ready.go:81] duration metric: took 404.8413ms for pod "kube-controller-manager-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:33.613335   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4l6c" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:33.792898   14108 request.go:629] Waited for 179.4829ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4l6c
	I0429 19:30:33.793211   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4l6c
	I0429 19:30:33.793211   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.793211   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.793211   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:33.798837   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:33.996298   14108 request.go:629] Waited for 196.1575ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:33.996298   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:33.996298   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:33.996298   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:33.996298   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.001661   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:34.003249   14108 pod_ready.go:92] pod "kube-proxy-k4l6c" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:34.003249   14108 pod_ready.go:81] duration metric: took 389.9106ms for pod "kube-proxy-k4l6c" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.003370   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s7ddt" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.202031   14108 request.go:629] Waited for 198.5016ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s7ddt
	I0429 19:30:34.202147   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s7ddt
	I0429 19:30:34.202147   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:34.202147   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:34.202147   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.207864   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:34.392151   14108 request.go:629] Waited for 182.9204ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:34.392421   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:34.392421   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:34.392421   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:34.392421   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.398839   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:34.399705   14108 pod_ready.go:92] pod "kube-proxy-s7ddt" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:34.399705   14108 pod_ready.go:81] duration metric: took 396.332ms for pod "kube-proxy-s7ddt" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.399705   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tm7tv" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.598267   14108 request.go:629] Waited for 198.3986ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tm7tv
	I0429 19:30:34.598566   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tm7tv
	I0429 19:30:34.598566   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:34.598566   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:34.598718   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.604317   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:34.801963   14108 request.go:629] Waited for 195.7383ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:34.802357   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:34.802357   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:34.802357   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:34.802666   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:34.808483   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:34.809338   14108 pod_ready.go:92] pod "kube-proxy-tm7tv" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:34.809338   14108 pod_ready.go:81] duration metric: took 409.6304ms for pod "kube-proxy-tm7tv" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:34.809338   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.004342   14108 request.go:629] Waited for 194.9182ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500
	I0429 19:30:35.004458   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500
	I0429 19:30:35.004458   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.004458   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.004458   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.010914   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:35.192979   14108 request.go:629] Waited for 182.0641ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:35.193187   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500
	I0429 19:30:35.193187   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.193187   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.193187   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.198295   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:35.198976   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:35.199048   14108 pod_ready.go:81] duration metric: took 389.7063ms for pod "kube-scheduler-ha-513500" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.199048   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.396708   14108 request.go:629] Waited for 197.4972ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m02
	I0429 19:30:35.397144   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m02
	I0429 19:30:35.397144   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.397144   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.397231   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.402633   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:35.598540   14108 request.go:629] Waited for 194.6929ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:35.598540   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m02
	I0429 19:30:35.598540   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.598540   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.598540   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.607719   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:35.608401   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:35.608807   14108 pod_ready.go:81] duration metric: took 409.756ms for pod "kube-scheduler-ha-513500-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.608889   14108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.801637   14108 request.go:629] Waited for 192.746ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m03
	I0429 19:30:35.801637   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-513500-m03
	I0429 19:30:35.801901   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.801966   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.801990   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.807380   14108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:30:35.992234   14108 request.go:629] Waited for 183.3531ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:35.992381   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes/ha-513500-m03
	I0429 19:30:35.992381   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:35.992381   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:35.992440   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:35.997976   14108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:30:35.998678   14108 pod_ready.go:92] pod "kube-scheduler-ha-513500-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:30:35.998678   14108 pod_ready.go:81] duration metric: took 389.7853ms for pod "kube-scheduler-ha-513500-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:30:35.998740   14108 pod_ready.go:38] duration metric: took 9.5992202s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:30:35.998740   14108 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:30:36.013099   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:30:36.040683   14108 api_server.go:72] duration metric: took 15.6835365s to wait for apiserver process to appear ...
	I0429 19:30:36.040683   14108 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:30:36.040683   14108 api_server.go:253] Checking apiserver healthz at https://172.17.240.42:8443/healthz ...
	I0429 19:30:36.048954   14108 api_server.go:279] https://172.17.240.42:8443/healthz returned 200:
	ok
	I0429 19:30:36.049597   14108 round_trippers.go:463] GET https://172.17.240.42:8443/version
	I0429 19:30:36.049597   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.049597   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.049597   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.050910   14108 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 19:30:36.053149   14108 api_server.go:141] control plane version: v1.30.0
	I0429 19:30:36.053149   14108 api_server.go:131] duration metric: took 12.466ms to wait for apiserver health ...
	I0429 19:30:36.053149   14108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:30:36.193835   14108 request.go:629] Waited for 140.6009ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:36.194200   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:36.194422   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.194422   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.194422   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.204325   14108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 19:30:36.215639   14108 system_pods.go:59] 24 kube-system pods found
	I0429 19:30:36.215639   14108 system_pods.go:61] "coredns-7db6d8ff4d-5jxcm" [37ba2046-4273-4570-87af-2cc6d03ca54a] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "coredns-7db6d8ff4d-n22jn" [053e60b3-41d0-4923-9655-02d7dacd691f] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "etcd-ha-513500" [63f6504e-f824-4c6d-afb9-92ed2f0457cd] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "etcd-ha-513500-m02" [2d63d157-843e-4750-b4b0-cfa577e7c8a1] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "etcd-ha-513500-m03" [5d7cba98-84b0-4b25-bbdb-189bf3a926db] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kindnet-9tv8w" [28dad06a-bed9-4b9c-a3b6-df814e1f3d7b] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kindnet-9w6qr" [eb7641e9-6df3-4b9f-b78c-e251de8ebf78] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kindnet-kdpql" [da068cd7-8925-45ed-a5a4-ff2db9d08bd8] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-apiserver-ha-513500" [e7a880e7-5218-4bde-9d62-532836751bbe] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-apiserver-ha-513500-m02" [52c1e20c-27a1-47d2-8405-4537727dac35] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-apiserver-ha-513500-m03" [7780dbcd-ed6c-4283-b93f-c725a0a78994] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-controller-manager-ha-513500" [bcf915a3-542c-422a-815b-823254b624ff] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-controller-manager-ha-513500-m02" [bc495cfd-bf88-4ef8-b33c-d252f4d9a717] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-controller-manager-ha-513500-m03" [a4507291-9f79-4ad9-8331-22ae19067d63] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-proxy-k4l6c" [2c1fff7e-2f97-497a-b6b6-0fcb6e2fcea6] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-proxy-s7ddt" [46edafa6-bc34-47d0-b33e-881bb23d4262] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-proxy-tm7tv" [b4ba7f26-253c-4c1c-83f4-7251a2ad14d4] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-scheduler-ha-513500" [76e5a3e9-d895-406a-ad12-cbaa48b4c52d] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-scheduler-ha-513500-m02" [643c27a0-ca4d-499d-abd7-99aa504580cb] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-scheduler-ha-513500-m03" [d319fcbd-9d28-4fca-b9c8-6a7c64c129c9] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-vip-ha-513500" [bf461c57-113c-4b7b-987e-04dcc8c13373] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-vip-ha-513500-m02" [76f42a60-c769-42fe-ab90-963fe0ec3489] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "kube-vip-ha-513500-m03" [e1568d39-7863-4071-b1b5-66713276b66b] Running
	I0429 19:30:36.215639   14108 system_pods.go:61] "storage-provisioner" [6a5df654-f7da-40f4-a05f-acf47aa779a1] Running
	I0429 19:30:36.215639   14108 system_pods.go:74] duration metric: took 162.4886ms to wait for pod list to return data ...
	I0429 19:30:36.215639   14108 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:30:36.398702   14108 request.go:629] Waited for 182.1485ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:30:36.399048   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:30:36.399048   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.399048   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.399048   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.407683   14108 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 19:30:36.407683   14108 default_sa.go:45] found service account: "default"
	I0429 19:30:36.407683   14108 default_sa.go:55] duration metric: took 191.5138ms for default service account to be created ...
	I0429 19:30:36.407683   14108 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:30:36.602610   14108 request.go:629] Waited for 194.7985ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:36.602743   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/namespaces/kube-system/pods
	I0429 19:30:36.602743   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.602743   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.602799   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.617222   14108 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 19:30:36.627728   14108 system_pods.go:86] 24 kube-system pods found
	I0429 19:30:36.627728   14108 system_pods.go:89] "coredns-7db6d8ff4d-5jxcm" [37ba2046-4273-4570-87af-2cc6d03ca54a] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "coredns-7db6d8ff4d-n22jn" [053e60b3-41d0-4923-9655-02d7dacd691f] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "etcd-ha-513500" [63f6504e-f824-4c6d-afb9-92ed2f0457cd] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "etcd-ha-513500-m02" [2d63d157-843e-4750-b4b0-cfa577e7c8a1] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "etcd-ha-513500-m03" [5d7cba98-84b0-4b25-bbdb-189bf3a926db] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kindnet-9tv8w" [28dad06a-bed9-4b9c-a3b6-df814e1f3d7b] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kindnet-9w6qr" [eb7641e9-6df3-4b9f-b78c-e251de8ebf78] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kindnet-kdpql" [da068cd7-8925-45ed-a5a4-ff2db9d08bd8] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-apiserver-ha-513500" [e7a880e7-5218-4bde-9d62-532836751bbe] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-apiserver-ha-513500-m02" [52c1e20c-27a1-47d2-8405-4537727dac35] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-apiserver-ha-513500-m03" [7780dbcd-ed6c-4283-b93f-c725a0a78994] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-controller-manager-ha-513500" [bcf915a3-542c-422a-815b-823254b624ff] Running
	I0429 19:30:36.627728   14108 system_pods.go:89] "kube-controller-manager-ha-513500-m02" [bc495cfd-bf88-4ef8-b33c-d252f4d9a717] Running
	I0429 19:30:36.628428   14108 system_pods.go:89] "kube-controller-manager-ha-513500-m03" [a4507291-9f79-4ad9-8331-22ae19067d63] Running
	I0429 19:30:36.628428   14108 system_pods.go:89] "kube-proxy-k4l6c" [2c1fff7e-2f97-497a-b6b6-0fcb6e2fcea6] Running
	I0429 19:30:36.628428   14108 system_pods.go:89] "kube-proxy-s7ddt" [46edafa6-bc34-47d0-b33e-881bb23d4262] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-proxy-tm7tv" [b4ba7f26-253c-4c1c-83f4-7251a2ad14d4] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-scheduler-ha-513500" [76e5a3e9-d895-406a-ad12-cbaa48b4c52d] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-scheduler-ha-513500-m02" [643c27a0-ca4d-499d-abd7-99aa504580cb] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-scheduler-ha-513500-m03" [d319fcbd-9d28-4fca-b9c8-6a7c64c129c9] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-vip-ha-513500" [bf461c57-113c-4b7b-987e-04dcc8c13373] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-vip-ha-513500-m02" [76f42a60-c769-42fe-ab90-963fe0ec3489] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "kube-vip-ha-513500-m03" [e1568d39-7863-4071-b1b5-66713276b66b] Running
	I0429 19:30:36.628525   14108 system_pods.go:89] "storage-provisioner" [6a5df654-f7da-40f4-a05f-acf47aa779a1] Running
	I0429 19:30:36.628525   14108 system_pods.go:126] duration metric: took 220.8407ms to wait for k8s-apps to be running ...
	I0429 19:30:36.628525   14108 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:30:36.640520   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:30:36.670181   14108 system_svc.go:56] duration metric: took 41.6562ms WaitForService to wait for kubelet
	I0429 19:30:36.670181   14108 kubeadm.go:576] duration metric: took 16.3130297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:30:36.670181   14108 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:30:36.805333   14108 request.go:629] Waited for 135.0648ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.240.42:8443/api/v1/nodes
	I0429 19:30:36.805532   14108 round_trippers.go:463] GET https://172.17.240.42:8443/api/v1/nodes
	I0429 19:30:36.805596   14108 round_trippers.go:469] Request Headers:
	I0429 19:30:36.805596   14108 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:30:36.805596   14108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 19:30:36.812355   14108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:30:36.813507   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:30:36.813507   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:30:36.813507   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:30:36.813507   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:30:36.813507   14108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:30:36.813507   14108 node_conditions.go:123] node cpu capacity is 2
	I0429 19:30:36.813507   14108 node_conditions.go:105] duration metric: took 143.3249ms to run NodePressure ...
	I0429 19:30:36.813507   14108 start.go:240] waiting for startup goroutines ...
	I0429 19:30:36.813507   14108 start.go:254] writing updated cluster config ...
	I0429 19:30:36.828524   14108 ssh_runner.go:195] Run: rm -f paused
	I0429 19:30:36.978823   14108 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 19:30:36.984290   14108 out.go:177] * Done! kubectl is now configured to use "ha-513500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.701942431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.702035429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.702244725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.719394203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.719863994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.720055890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:22:43 ha-513500 dockerd[1332]: time="2024-04-29T19:22:43.720783976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:31:15 ha-513500 dockerd[1332]: time="2024-04-29T19:31:15.914662473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:31:15 ha-513500 dockerd[1332]: time="2024-04-29T19:31:15.915081269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:31:15 ha-513500 dockerd[1332]: time="2024-04-29T19:31:15.915401666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:31:15 ha-513500 dockerd[1332]: time="2024-04-29T19:31:15.915728063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:31:16 ha-513500 cri-dockerd[1230]: time="2024-04-29T19:31:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50296d28c3005f998d69bc903c6ea6db48991e8d4409d10633aec53b4aff5d51/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 19:31:17 ha-513500 cri-dockerd[1230]: time="2024-04-29T19:31:17Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 19:31:17 ha-513500 dockerd[1332]: time="2024-04-29T19:31:17.751845061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:31:17 ha-513500 dockerd[1332]: time="2024-04-29T19:31:17.752028161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:31:17 ha-513500 dockerd[1332]: time="2024-04-29T19:31:17.752158561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:31:17 ha-513500 dockerd[1332]: time="2024-04-29T19:31:17.752406260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:32:22 ha-513500 dockerd[1326]: 2024/04/29 19:32:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:32:22 ha-513500 dockerd[1326]: 2024/04/29 19:32:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:32:23 ha-513500 dockerd[1326]: 2024/04/29 19:32:23 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:32:23 ha-513500 dockerd[1326]: 2024/04/29 19:32:23 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:32:23 ha-513500 dockerd[1326]: 2024/04/29 19:32:23 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:32:23 ha-513500 dockerd[1326]: 2024/04/29 19:32:23 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:32:23 ha-513500 dockerd[1326]: 2024/04/29 19:32:23 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:32:23 ha-513500 dockerd[1326]: 2024/04/29 19:32:23 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	31760b27e1330       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   50296d28c3005       busybox-fc5497c4f-k7nt6
	d364c1e6d94f1       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   061b9ae8bb5d4       coredns-7db6d8ff4d-n22jn
	fb655010c9750       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   ec7a4d754b09e       coredns-7db6d8ff4d-5jxcm
	ac90b27682671       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   ab1a16ac763fe       storage-provisioner
	05ddacd92005a       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Running             kindnet-cni               0                   85bfce17a67a6       kindnet-9w6qr
	c0ca10790ffe0       a0bf559e280cf                                                                                         26 minutes ago      Running             kube-proxy                0                   e86da83dd4c8b       kube-proxy-tm7tv
	3174d69f5cd02       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   f9b372bb3f346       kube-vip-ha-513500
	768ab6a9d4e64       259c8277fcbbc                                                                                         27 minutes ago      Running             kube-scheduler            0                   dea83193ee65c       kube-scheduler-ha-513500
	f2d43ad89ec76       c7aad43836fa5                                                                                         27 minutes ago      Running             kube-controller-manager   0                   df7c2aca21ced       kube-controller-manager-ha-513500
	24fcd8dc17cb7       c42f13656d0b2                                                                                         27 minutes ago      Running             kube-apiserver            0                   09e6ad066f403       kube-apiserver-ha-513500
	ddba464c39361       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   26bae3e1dab45       etcd-ha-513500
	
	
	==> coredns [d364c1e6d94f] <==
	[INFO] 10.244.0.4:33090 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002173s
	[INFO] 10.244.2.2:49076 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001087s
	[INFO] 10.244.2.2:55220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003780794s
	[INFO] 10.244.2.2:38013 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001336s
	[INFO] 10.244.2.2:46025 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002227s
	[INFO] 10.244.2.2:54398 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.002697796s
	[INFO] 10.244.2.2:49424 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000057s
	[INFO] 10.244.2.2:35058 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057999s
	[INFO] 10.244.2.2:36567 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000527s
	[INFO] 10.244.1.2:56534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000299299s
	[INFO] 10.244.1.2:56209 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002045s
	[INFO] 10.244.1.2:46058 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000607s
	[INFO] 10.244.0.4:42958 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268999s
	[INFO] 10.244.0.4:36079 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000558699s
	[INFO] 10.244.0.4:35768 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002419s
	[INFO] 10.244.2.2:38045 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149499s
	[INFO] 10.244.1.2:56344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001919s
	[INFO] 10.244.1.2:34882 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001945s
	[INFO] 10.244.1.2:52415 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001108s
	[INFO] 10.244.0.4:60373 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002066s
	[INFO] 10.244.0.4:39593 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002968s
	[INFO] 10.244.2.2:56962 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001511s
	[INFO] 10.244.1.2:51827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002166s
	[INFO] 10.244.1.2:55197 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000845s
	[INFO] 10.244.1.2:59450 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001143s
	
	
	==> coredns [fb655010c975] <==
	[INFO] 10.244.2.2:54946 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000280099s
	[INFO] 10.244.2.2:58062 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003970695s
	[INFO] 10.244.2.2:51140 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000914s
	[INFO] 10.244.1.2:50028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252599s
	[INFO] 10.244.1.2:60257 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000904s
	[INFO] 10.244.0.4:52149 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.027238658s
	[INFO] 10.244.0.4:58086 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000377799s
	[INFO] 10.244.0.4:36347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210399s
	[INFO] 10.244.0.4:39627 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004474094s
	[INFO] 10.244.1.2:48954 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00015s
	[INFO] 10.244.1.2:53680 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000647s
	[INFO] 10.244.1.2:46398 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001741s
	[INFO] 10.244.1.2:56009 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002674s
	[INFO] 10.244.1.2:46005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001633s
	[INFO] 10.244.0.4:36504 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000722s
	[INFO] 10.244.2.2:33735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001708s
	[INFO] 10.244.2.2:37320 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063s
	[INFO] 10.244.2.2:47242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000605s
	[INFO] 10.244.1.2:56773 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001174s
	[INFO] 10.244.0.4:58475 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000397999s
	[INFO] 10.244.0.4:58342 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001854s
	[INFO] 10.244.2.2:58709 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001071s
	[INFO] 10.244.2.2:58185 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001712s
	[INFO] 10.244.2.2:43286 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000634s
	[INFO] 10.244.1.2:52086 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000417199s
	
	
	==> describe nodes <==
	Name:               ha-513500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-513500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-513500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_22_20_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:22:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-513500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:49:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:46:37 +0000   Mon, 29 Apr 2024 19:22:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:46:37 +0000   Mon, 29 Apr 2024 19:22:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:46:37 +0000   Mon, 29 Apr 2024 19:22:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:46:37 +0000   Mon, 29 Apr 2024 19:22:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.240.42
	  Hostname:    ha-513500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3369ba1532804e80b04fd813c27bd99a
	  System UUID:                1d78230c-499d-7745-aa2e-7c4bf305bc50
	  Boot ID:                    5a7d9e7d-780b-43c5-8522-a1cdbef43f6b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k7nt6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-5jxcm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-n22jn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-513500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-9w6qr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-513500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-513500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-tm7tv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-513500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-513500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-513500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-513500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-513500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-513500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m   node-controller  Node ha-513500 event: Registered Node ha-513500 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-513500 status is now: NodeReady
	  Normal  RegisteredNode           22m   node-controller  Node ha-513500 event: Registered Node ha-513500 in Controller
	  Normal  RegisteredNode           18m   node-controller  Node ha-513500 event: Registered Node ha-513500 in Controller
	
	
	Name:               ha-513500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-513500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-513500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_26_24_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:26:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-513500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:48:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 19:46:44 +0000   Mon, 29 Apr 2024 19:48:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 19:46:44 +0000   Mon, 29 Apr 2024 19:48:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 19:46:44 +0000   Mon, 29 Apr 2024 19:48:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 19:46:44 +0000   Mon, 29 Apr 2024 19:48:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.247.146
	  Hostname:    ha-513500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ac51f9045144bd8a5e94498bc8b29b2
	  System UUID:                161b36b5-754a-9741-b399-febb088d3a37
	  Boot ID:                    2849bf10-85a2-4a05-ade6-24e1c44b59eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-txsvr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-513500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-kdpql                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-513500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-513500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-k4l6c                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-513500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-513500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-513500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-513500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-513500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-513500-m02 event: Registered Node ha-513500-m02 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-513500-m02 event: Registered Node ha-513500-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-513500-m02 event: Registered Node ha-513500-m02 in Controller
	  Normal  NodeNotReady             32s                node-controller  Node ha-513500-m02 status is now: NodeNotReady
	
	
	Name:               ha-513500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-513500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-513500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_30_19_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:30:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-513500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:49:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:47:00 +0000   Mon, 29 Apr 2024 19:30:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:47:00 +0000   Mon, 29 Apr 2024 19:30:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:47:00 +0000   Mon, 29 Apr 2024 19:30:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:47:00 +0000   Mon, 29 Apr 2024 19:30:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.246.101
	  Hostname:    ha-513500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 352c8de6680a4d82b936c28b2c2b4af4
	  System UUID:                22f68df9-d6ea-da42-b3a5-feb527052c05
	  Boot ID:                    3d436fce-cbcf-4e43-a244-5b32e568972d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k7rdw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-513500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-9tv8w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-513500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-513500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-s7ddt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-513500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-513500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  RegisteredNode           19m                node-controller  Node ha-513500-m03 event: Registered Node ha-513500-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-513500-m03 event: Registered Node ha-513500-m03 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node ha-513500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node ha-513500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node ha-513500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                19m                kubelet          Node ha-513500-m03 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node ha-513500-m03 event: Registered Node ha-513500-m03 in Controller
	
	
	Name:               ha-513500-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-513500-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-513500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_35_38_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:35:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-513500-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:49:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:46:19 +0000   Mon, 29 Apr 2024 19:35:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:46:19 +0000   Mon, 29 Apr 2024 19:35:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:46:19 +0000   Mon, 29 Apr 2024 19:35:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:46:19 +0000   Mon, 29 Apr 2024 19:36:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.240.155
	  Hostname:    ha-513500-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 67ac2b26edde4bfd9a41dcfc789003cc
	  System UUID:                c9b52dfb-f11e-2746-b991-bdd0eea2e412
	  Boot ID:                    61f9be4b-63cb-4ec5-aa9f-69aa6aecd475
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c8nv6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-c7z5z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-513500-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-513500-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-513500-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-513500-m04 event: Registered Node ha-513500-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-513500-m04 event: Registered Node ha-513500-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-513500-m04 event: Registered Node ha-513500-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-513500-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.439818] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 19:21] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.189015] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +31.573298] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.108323] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.582257] systemd-fstab-generator[984]: Ignoring "noauto" option for root device
	[  +0.210246] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.257185] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +2.920377] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.213026] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.228538] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.302335] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.648892] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.109564] kauditd_printk_skb: 205 callbacks suppressed
	[Apr29 19:22] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[  +6.088578] systemd-fstab-generator[1718]: Ignoring "noauto" option for root device
	[  +0.121125] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.996068] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.426455] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[ +14.311868] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.109377] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.382854] kauditd_printk_skb: 33 callbacks suppressed
	[Apr29 19:26] hrtimer: interrupt took 6254643 ns
	
	
	==> etcd [ddba464c3936] <==
	{"level":"warn","ts":"2024-04-29T19:49:30.985341Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:30.993631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.010406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.023095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.061762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.067047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.080041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.087995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.106052Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.131471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.142789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.155503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.161016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.167126Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.179453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.191404Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.203505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.21006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.216935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.226356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.234205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.238982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.250138Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.264202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:49:31.320854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3ede46bdb03fb638","from":"3ede46bdb03fb638","remote-peer-id":"a30859e8a544b3c9","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:49:31 up 29 min,  0 users,  load average: 0.31, 0.37, 0.38
	Linux ha-513500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [05ddacd92005] <==
	I0429 19:48:53.987383       1 main.go:250] Node ha-513500-m04 has CIDR [10.244.3.0/24] 
	I0429 19:49:04.003899       1 main.go:223] Handling node with IPs: map[172.17.240.42:{}]
	I0429 19:49:04.004006       1 main.go:227] handling current node
	I0429 19:49:04.004022       1 main.go:223] Handling node with IPs: map[172.17.247.146:{}]
	I0429 19:49:04.004031       1 main.go:250] Node ha-513500-m02 has CIDR [10.244.1.0/24] 
	I0429 19:49:04.004900       1 main.go:223] Handling node with IPs: map[172.17.246.101:{}]
	I0429 19:49:04.004920       1 main.go:250] Node ha-513500-m03 has CIDR [10.244.2.0/24] 
	I0429 19:49:04.005137       1 main.go:223] Handling node with IPs: map[172.17.240.155:{}]
	I0429 19:49:04.005176       1 main.go:250] Node ha-513500-m04 has CIDR [10.244.3.0/24] 
	I0429 19:49:14.021079       1 main.go:223] Handling node with IPs: map[172.17.240.42:{}]
	I0429 19:49:14.021109       1 main.go:227] handling current node
	I0429 19:49:14.021121       1 main.go:223] Handling node with IPs: map[172.17.247.146:{}]
	I0429 19:49:14.021128       1 main.go:250] Node ha-513500-m02 has CIDR [10.244.1.0/24] 
	I0429 19:49:14.021733       1 main.go:223] Handling node with IPs: map[172.17.246.101:{}]
	I0429 19:49:14.021826       1 main.go:250] Node ha-513500-m03 has CIDR [10.244.2.0/24] 
	I0429 19:49:14.021908       1 main.go:223] Handling node with IPs: map[172.17.240.155:{}]
	I0429 19:49:14.021940       1 main.go:250] Node ha-513500-m04 has CIDR [10.244.3.0/24] 
	I0429 19:49:24.031586       1 main.go:223] Handling node with IPs: map[172.17.240.42:{}]
	I0429 19:49:24.031781       1 main.go:227] handling current node
	I0429 19:49:24.031803       1 main.go:223] Handling node with IPs: map[172.17.247.146:{}]
	I0429 19:49:24.031840       1 main.go:250] Node ha-513500-m02 has CIDR [10.244.1.0/24] 
	I0429 19:49:24.032023       1 main.go:223] Handling node with IPs: map[172.17.246.101:{}]
	I0429 19:49:24.032126       1 main.go:250] Node ha-513500-m03 has CIDR [10.244.2.0/24] 
	I0429 19:49:24.032212       1 main.go:223] Handling node with IPs: map[172.17.240.155:{}]
	I0429 19:49:24.032222       1 main.go:250] Node ha-513500-m04 has CIDR [10.244.3.0/24] 
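The kindnet log above re-announces every node's pod CIDR on each sync pass, which makes the actual topology hard to read in a long dump. A minimal sketch (not part of the test harness; the two message formats are taken from the output above) that collapses such lines into a final IP→CIDR map:

```python
import re

# "Handling node with IPs: map[172.17.247.146:{}]"
IP_RE = re.compile(r"Handling node with IPs: map\[([\d.]+):")
# "Node ha-513500-m02 has CIDR [10.244.1.0/24]"
CIDR_RE = re.compile(r"Node (\S+) has CIDR \[([\d./]+)\]")

def node_cidr_map(lines):
    """Pair each 'Handling node' IP with the CIDR announced for it."""
    mapping, last_ip = {}, None
    for line in lines:
        m = IP_RE.search(line)
        if m:
            last_ip = m.group(1)
            continue
        m = CIDR_RE.search(line)
        if m and last_ip:
            mapping[last_ip] = (m.group(1), m.group(2))
            last_ip = None
    return mapping

sample = [
    "I0429 ... Handling node with IPs: map[172.17.247.146:{}]",
    "I0429 ... Node ha-513500-m02 has CIDR [10.244.1.0/24]",
    "I0429 ... Handling node with IPs: map[172.17.246.101:{}]",
    "I0429 ... Node ha-513500-m03 has CIDR [10.244.2.0/24]",
]
print(node_cidr_map(sample))
```

The "handling current node" lines carry no CIDR, so the sketch simply waits for the next IP line; repeated announcements overwrite identical entries, leaving one row per node.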
	
	
	==> kube-apiserver [24fcd8dc17cb] <==
	Trace[1804058196]: ---"About to write a response" 563ms (19:35:48.532)
	Trace[1804058196]: [563.658826ms] [563.658826ms] END
	I0429 19:35:48.542029       1 trace.go:236] Trace[958614674]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7daf0287-0a99-406b-9c34-132eae4e561a,client:172.17.240.155,api-group:coordination.k8s.io,api-version:v1,name:ha-513500-m04,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-513500-m04,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (29-Apr-2024 19:35:48.012) (total time: 526ms):
	Trace[958614674]: ["GuaranteedUpdate etcd3" audit-id:7daf0287-0a99-406b-9c34-132eae4e561a,key:/leases/kube-node-lease/ha-513500-m04,type:*coordination.Lease,resource:leases.coordination.k8s.io 529ms (19:35:48.012)
	Trace[958614674]:  ---"Txn call completed" 524ms (19:35:48.538)]
	Trace[958614674]: [526.204796ms] [526.204796ms] END
	I0429 19:35:48.543559       1 trace.go:236] Trace[495979905]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.17.240.42,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 19:35:47.789) (total time: 754ms):
	Trace[495979905]: ---"Transaction prepared" 480ms (19:35:48.273)
	Trace[495979905]: ---"Txn call completed" 269ms (19:35:48.543)
	Trace[495979905]: [754.126852ms] [754.126852ms] END
	I0429 19:48:45.022624       1 trace.go:236] Trace[1217428796]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:11eea44c-c5a1-4e2d-9c3e-0dd9499dc556,client:172.17.246.101,api-group:coordination.k8s.io,api-version:v1,name:ha-513500-m03,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-513500-m03,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (29-Apr-2024 19:48:44.357) (total time: 664ms):
	Trace[1217428796]: ["GuaranteedUpdate etcd3" audit-id:11eea44c-c5a1-4e2d-9c3e-0dd9499dc556,key:/leases/kube-node-lease/ha-513500-m03,type:*coordination.Lease,resource:leases.coordination.k8s.io 664ms (19:48:44.358)
	Trace[1217428796]:  ---"Txn call completed" 663ms (19:48:45.022)]
	Trace[1217428796]: [664.732418ms] [664.732418ms] END
	I0429 19:48:45.026924       1 trace.go:236] Trace[1334913307]: "Get" accept:application/json, */*,audit-id:df5f7731-6685-4868-9caf-95de1e03db98,client:172.17.240.42,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Apr-2024 19:48:44.367) (total time: 659ms):
	Trace[1334913307]: ---"About to write a response" 659ms (19:48:45.026)
	Trace[1334913307]: [659.640188ms] [659.640188ms] END
	I0429 19:48:45.631663       1 trace.go:236] Trace[2033793127]: "Update" accept:application/json, */*,audit-id:d5949106-9331-4dcf-8e2b-4c4f310dfe5f,client:172.17.240.42,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 19:48:45.030) (total time: 600ms):
	Trace[2033793127]: ["GuaranteedUpdate etcd3" audit-id:d5949106-9331-4dcf-8e2b-4c4f310dfe5f,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 600ms (19:48:45.031)
	Trace[2033793127]:  ---"Txn call completed" 599ms (19:48:45.631)]
	Trace[2033793127]: [600.8098ms] [600.8098ms] END
	I0429 19:48:48.625635       1 trace.go:236] Trace[1866506176]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.17.240.42,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 19:48:47.854) (total time: 770ms):
	Trace[1866506176]: ---"Transaction prepared" 403ms (19:48:48.260)
	Trace[1866506176]: ---"Txn call completed" 364ms (19:48:48.625)
	Trace[1866506176]: [770.98364ms] [770.98364ms] END
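The kube-apiserver trace entries above record per-request latencies (every trace ends with a `[<total>ms] ... END` line). When skimming a dump this long, a small script can pull out the slowest operations. A minimal sketch, using only the line format shown above:

```python
import re

# Matches the closing line of an apiserver trace, e.g.
# "Trace[495979905]: [754.126852ms] [754.126852ms] END"
TRACE_END = re.compile(r"Trace\[(\d+)\]: \[([\d.]+)ms\]")

def slowest_traces(lines, threshold_ms=500.0):
    """Return (trace_id, total_ms) pairs over threshold_ms, slowest first."""
    hits = []
    for line in lines:
        m = TRACE_END.search(line)
        if m and line.rstrip().endswith("END"):
            total = float(m.group(2))
            if total >= threshold_ms:
                hits.append((m.group(1), total))
    return sorted(hits, key=lambda t: -t[1])

sample = [
    "Trace[495979905]: [754.126852ms] [754.126852ms] END",
    "Trace[1217428796]: [664.732418ms] [664.732418ms] END",
    'Trace[1334913307]: ---"About to write a response" 659ms (19:48:45.026)',
]
print(slowest_traces(sample))
```

Intermediate trace lines (the `---"..."` annotations) are skipped deliberately; only the END line carries the total.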
	
	
	==> kube-controller-manager [f2d43ad89ec7] <==
	I0429 19:26:19.523490       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-513500-m02\" does not exist"
	I0429 19:26:19.539501       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-513500-m02" podCIDRs=["10.244.1.0/24"]
	I0429 19:26:21.558506       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-513500-m02"
	I0429 19:30:11.531218       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-513500-m03\" does not exist"
	I0429 19:30:11.564113       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-513500-m03" podCIDRs=["10.244.2.0/24"]
	I0429 19:30:11.605524       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-513500-m03"
	I0429 19:31:14.980963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.830652ms"
	I0429 19:31:15.180461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="199.10268ms"
	I0429 19:31:15.504804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="324.26947ms"
	I0429 19:31:15.656125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.25264ms"
	I0429 19:31:15.656568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="357.197µs"
	I0429 19:31:18.248560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.324252ms"
	I0429 19:31:18.249018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.2µs"
	I0429 19:31:18.506585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.835664ms"
	I0429 19:31:18.508020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.3µs"
	I0429 19:31:18.590521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.439075ms"
	I0429 19:31:18.592258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.7µs"
	E0429 19:35:37.623401       1 certificate_controller.go:146] Sync csr-crhqm failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-crhqm": the object has been modified; please apply your changes to the latest version and try again
	I0429 19:35:37.686422       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-513500-m04\" does not exist"
	I0429 19:35:37.707483       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-513500-m04" podCIDRs=["10.244.3.0/24"]
	I0429 19:35:42.241074       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-513500-m04"
	I0429 19:36:00.852842       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-513500-m04"
	I0429 19:48:59.565828       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-513500-m04"
	I0429 19:48:59.734662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.089366ms"
	I0429 19:48:59.735020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="136.398µs"
	
	
	==> kube-proxy [c0ca10790ffe] <==
	I0429 19:22:34.055101       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:22:34.089710       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.240.42"]
	I0429 19:22:34.143942       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:22:34.144039       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:22:34.144064       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:22:34.151484       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:22:34.152452       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:22:34.152502       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:22:34.159175       1 config.go:192] "Starting service config controller"
	I0429 19:22:34.159944       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:22:34.159998       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:22:34.160006       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:22:34.163187       1 config.go:319] "Starting node config controller"
	I0429 19:22:34.163226       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:22:34.260818       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:22:34.260751       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:22:34.264047       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [768ab6a9d4e6] <==
	W0429 19:22:16.282776       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 19:22:16.283059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 19:22:16.342538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 19:22:16.342776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 19:22:16.349978       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:22:16.350032       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:22:16.410571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:22:16.411241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:22:16.519007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:22:16.519170       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:22:16.556273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 19:22:16.556720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 19:22:16.666413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:22:16.667013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 19:22:16.802894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:22:16.803021       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:22:16.836025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:22:16.836503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:22:16.901987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 19:22:16.902577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 19:22:19.381948       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 19:30:11.647198       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9tv8w\": pod kindnet-9tv8w is already assigned to node \"ha-513500-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-9tv8w" node="ha-513500-m03"
	E0429 19:30:11.647661       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 28dad06a-bed9-4b9c-a3b6-df814e1f3d7b(kube-system/kindnet-9tv8w) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9tv8w"
	E0429 19:30:11.647976       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9tv8w\": pod kindnet-9tv8w is already assigned to node \"ha-513500-m03\"" pod="kube-system/kindnet-9tv8w"
	I0429 19:30:11.648139       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9tv8w" node="ha-513500-m03"
	
	
	==> kubelet <==
	Apr 29 19:45:19 ha-513500 kubelet[2212]: E0429 19:45:19.580098    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:45:19 ha-513500 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:45:19 ha-513500 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:45:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:45:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:46:19 ha-513500 kubelet[2212]: E0429 19:46:19.582486    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:46:19 ha-513500 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:46:19 ha-513500 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:46:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:46:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:47:19 ha-513500 kubelet[2212]: E0429 19:47:19.582469    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:47:19 ha-513500 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:47:19 ha-513500 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:47:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:47:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:48:19 ha-513500 kubelet[2212]: E0429 19:48:19.581890    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:48:19 ha-513500 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:48:19 ha-513500 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:48:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:48:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:49:19 ha-513500 kubelet[2212]: E0429 19:49:19.580197    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:49:19 ha-513500 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:49:19 ha-513500 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:49:19 ha-513500 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:49:19 ha-513500 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
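The kubelet's ip6tables-canary failure above recurs on a fixed cadence rather than in a burst, which points at the periodic canary check hitting a missing ip6tables `nat` table every cycle. Parsing the syslog-style timestamps confirms the interval. A minimal sketch, assuming the `Mon DD HH:MM:SS` prefix format shown above:

```python
from datetime import datetime

def canary_intervals(lines):
    """Return the gaps in seconds between consecutive canary failures,
    extracted from syslog-style kubelet lines."""
    stamps = []
    for line in lines:
        if "Could not set up iptables canary" in line:
            # First 15 chars are the timestamp, e.g. "Apr 29 19:45:19"
            stamps.append(datetime.strptime(line[:15], "%b %d %H:%M:%S"))
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

sample = [
    'Apr 29 19:45:19 ha-513500 kubelet[2212]: E0429 "Could not set up iptables canary" err=<',
    'Apr 29 19:46:19 ha-513500 kubelet[2212]: E0429 "Could not set up iptables canary" err=<',
    'Apr 29 19:47:19 ha-513500 kubelet[2212]: E0429 "Could not set up iptables canary" err=<',
]
print(canary_intervals(sample))
```

A steady 60-second spacing, as in the log above, is the canary's own retry period; genuinely new failures would show up at irregular intervals.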
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 19:49:22.649394   12272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-513500 -n ha-513500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-513500 -n ha-513500: (12.5217905s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-513500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (98.48s)

                                                
                                    
TestMountStart/serial/RestartStopped (190.45s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-089600
E0429 20:18:10.214315   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 20:20:23.996522   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-089600: exit status 90 (2m58.2991316s)

                                                
                                                
-- stdout --
	* [mount-start-2-089600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-089600
	* Restarting existing hyperv VM for "mount-start-2-089600" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:17:47.072609    6448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:19:15 mount-start-2-089600 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:19:15 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:15.948137248Z" level=info msg="Starting up"
	Apr 29 20:19:15 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:15.949490074Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:19:15 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:15.950763199Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 20:19:15 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:15.988910940Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.023542286Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.023665888Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.023788890Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.023853191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.024578804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.024715607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.024920611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.025177915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.025203416Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.025217816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.025945529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.026784545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.030019903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.030152306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.030342109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.030438911Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.031082923Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.031201025Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.031219725Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.033046959Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.033180161Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.033205861Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.033223862Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.033239862Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.033318763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.033852873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034103678Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034234880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034255881Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034271481Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034292881Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034306681Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034322082Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034345482Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034363983Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034377683Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034389683Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034410783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034427784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034443984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034458084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034475785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034490585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034507185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034521185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034538486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034560286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034575086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034587487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034603287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034621987Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034644488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034657488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034673488Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034765690Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034817391Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034831191Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034846091Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.034971194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.035018394Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.035034395Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.035362401Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.035524504Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.035598405Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:19:16 mount-start-2-089600 dockerd[668]: time="2024-04-29T20:19:16.035735107Z" level=info msg="containerd successfully booted in 0.049982s"
	Apr 29 20:19:17 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:17.003428519Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:19:17 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:17.033507532Z" level=info msg="Loading containers: start."
	Apr 29 20:19:17 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:17.303156650Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 20:19:17 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:17.396364070Z" level=info msg="Loading containers: done."
	Apr 29 20:19:17 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:17.421822232Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:19:17 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:17.422751389Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:19:17 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:17.484384072Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:19:17 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:17.485057513Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:19:17 mount-start-2-089600 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:19:44 mount-start-2-089600 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:19:44 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:44.042785083Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:19:44 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:44.044571062Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:19:44 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:44.045467102Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:19:44 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:44.045637209Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:19:44 mount-start-2-089600 dockerd[662]: time="2024-04-29T20:19:44.045685311Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:19:45 mount-start-2-089600 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:19:45 mount-start-2-089600 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:19:45 mount-start-2-089600 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:19:45 mount-start-2-089600 dockerd[1039]: time="2024-04-29T20:19:45.133832525Z" level=info msg="Starting up"
	Apr 29 20:20:45 mount-start-2-089600 dockerd[1039]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:20:45 mount-start-2-089600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:20:45 mount-start-2-089600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:20:45 mount-start-2-089600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:168: restart failed: "out/minikube-windows-amd64.exe start -p mount-start-2-089600" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-089600 -n mount-start-2-089600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-089600 -n mount-start-2-089600: exit status 6 (12.1432772s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:20:45.385760   13032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 20:20:57.315758   13032 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-089600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-089600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (190.45s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (466.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-515700 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0429 20:23:10.214288   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 20:25:23.993850   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 20:26:47.227972   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 20:28:10.222691   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-515700 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: exit status 90 (7m9.6989646s)

                                                
                                                
-- stdout --
	* [multinode-515700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "multinode-515700" primary control-plane node in "multinode-515700" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "multinode-515700-m02" worker node in "multinode-515700" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.17.241.25
	  - NO_PROXY=172.17.241.25
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:22:01.338789    6560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 20:22:01.431751    6560 out.go:291] Setting OutFile to fd 1000 ...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.432590    6560 out.go:304] Setting ErrFile to fd 1156...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.463325    6560 out.go:298] Setting JSON to false
	I0429 20:22:01.467738    6560 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24060,"bootTime":1714398060,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 20:22:01.467738    6560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 20:22:01.473386    6560 out.go:177] * [multinode-515700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 20:22:01.477900    6560 notify.go:220] Checking for updates...
	I0429 20:22:01.480328    6560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:22:01.485602    6560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:22:01.488123    6560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 20:22:01.490657    6560 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:22:01.493249    6560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:22:01.496241    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:22:01.497610    6560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:22:06.930154    6560 out.go:177] * Using the hyperv driver based on user configuration
	I0429 20:22:06.933587    6560 start.go:297] selected driver: hyperv
	I0429 20:22:06.933587    6560 start.go:901] validating driver "hyperv" against <nil>
	I0429 20:22:06.933587    6560 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:22:06.986262    6560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 20:22:06.987723    6560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:22:06.988334    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:22:06.988334    6560 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 20:22:06.988334    6560 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 20:22:06.988334    6560 start.go:340] cluster config:
	{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:22:06.988334    6560 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:22:06.992867    6560 out.go:177] * Starting "multinode-515700" primary control-plane node in "multinode-515700" cluster
	I0429 20:22:06.995976    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:22:06.996499    6560 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 20:22:06.996703    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:22:06.996741    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:22:06.996741    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:22:06.996741    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:22:06.996741    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json: {Name:mkdf346f9e30a055d7c79ffb416c8ce539e0c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:360] acquireMachinesLock for multinode-515700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700"
	I0429 20:22:06.999081    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:22:06.999081    6560 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 20:22:07.006481    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:22:07.006790    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:22:07.006790    6560 client.go:168] LocalClient.Create starting
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:22:09.217702    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:22:09.217822    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:09.217951    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:22:11.056235    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:12.618512    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:16.461966    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:22:17.019827    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: Creating VM...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:20.140355    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:22:20.140483    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:22.004896    6560 main.go:141] libmachine: Creating VHD
	I0429 20:22:22.004896    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:22:25.795387    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DA11902-3EE7-4F99-A00A-752C0686FD99
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:22:25.796445    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:25.796496    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:22:25.796702    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:22:25.814462    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:22:29.034595    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:29.035273    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:29.035337    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -SizeBytes 20000MB
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:31.671427    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-515700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:35.461856    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700 -DynamicMemoryEnabled $false
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:37.723890    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700 -Count 2
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\boot2docker.iso'
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:42.558432    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd'
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:45.265400    6560 main.go:141] libmachine: Starting VM...
	I0429 20:22:45.265400    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:50.732199    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:50.733048    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:50.733149    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:54.308058    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:56.517062    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:59.110985    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:59.111613    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:00.127675    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:02.349860    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:05.987459    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:08.224322    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:10.790333    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:10.791338    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:11.803237    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:14.061252    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:18.855659    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:23:18.855911    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:21.063683    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:23.697335    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:23.697580    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:23.703285    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:23.713965    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:23.713965    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:23:23.854760    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:23:23.854760    6560 buildroot.go:166] provisioning hostname "multinode-515700"
	I0429 20:23:23.854760    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:26.029157    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:26.029995    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:26.030093    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:28.624899    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:28.625217    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:28.625495    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700 && echo "multinode-515700" | sudo tee /etc/hostname
	I0429 20:23:28.799265    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700
	
	I0429 20:23:28.799376    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:30.924333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:33.588985    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:33.589381    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:33.589381    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:23:33.743242    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:23:33.743242    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:23:33.743242    6560 buildroot.go:174] setting up certificates
	I0429 20:23:33.743242    6560 provision.go:84] configureAuth start
	I0429 20:23:33.743939    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:35.885562    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:38.477298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:40.581307    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:43.165623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:43.165853    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:43.165933    6560 provision.go:143] copyHostCerts
	I0429 20:23:43.166093    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:23:43.166093    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:23:43.166093    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:23:43.166722    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:23:43.168141    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:23:43.168305    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:23:43.168305    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:23:43.168887    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:23:43.169614    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:23:43.170245    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:23:43.170340    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:23:43.170731    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:23:43.171712    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700 san=[127.0.0.1 172.17.241.25 localhost minikube multinode-515700]
	I0429 20:23:43.368646    6560 provision.go:177] copyRemoteCerts
	I0429 20:23:43.382882    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:23:43.382882    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:45.539057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:48.109324    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:23:48.217340    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8343588s)
	I0429 20:23:48.217478    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:23:48.218375    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:23:48.267636    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:23:48.267636    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 20:23:48.316493    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:23:48.316784    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:23:48.372851    6560 provision.go:87] duration metric: took 14.6294509s to configureAuth
	I0429 20:23:48.372952    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:23:48.373086    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:23:48.373086    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:50.522765    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:50.522998    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:50.523146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:53.169650    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:53.170462    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:53.170462    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:23:53.302673    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:23:53.302726    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:23:53.302726    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:23:53.302726    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:55.434984    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:58.060160    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:58.061082    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:58.067077    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:58.068199    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:58.068292    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:23:58.226608    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:23:58.227212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:00.358933    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:02.944293    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:02.944373    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:02.950227    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:02.950958    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:02.950958    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:24:05.224184    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:24:05.224184    6560 machine.go:97] duration metric: took 46.3681587s to provisionDockerMachine
	I0429 20:24:05.224184    6560 client.go:171] duration metric: took 1m58.2164577s to LocalClient.Create
	I0429 20:24:05.224184    6560 start.go:167] duration metric: took 1m58.2164577s to libmachine.API.Create "multinode-515700"
	I0429 20:24:05.224184    6560 start.go:293] postStartSetup for "multinode-515700" (driver="hyperv")
	I0429 20:24:05.224184    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:24:05.241199    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:24:05.241199    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:07.393879    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:09.983789    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:09.984033    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:09.984469    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:10.092254    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8510176s)
	I0429 20:24:10.107982    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:24:10.116700    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:24:10.116700    6560 command_runner.go:130] > ID=buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:24:10.116700    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:24:10.116700    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:24:10.116700    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:24:10.117268    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:24:10.118515    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:24:10.118515    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:24:10.132514    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:24:10.152888    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:24:10.201665    6560 start.go:296] duration metric: took 4.9774423s for postStartSetup
	I0429 20:24:10.204966    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:12.345708    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:12.345785    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:12.345855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:14.957675    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:24:14.960758    6560 start.go:128] duration metric: took 2m7.9606641s to createHost
	I0429 20:24:14.962026    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:17.100197    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:17.100281    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:17.100354    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:19.725196    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:19.725860    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:19.725860    6560 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 20:24:19.867560    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422259.868914581
	
	I0429 20:24:19.867560    6560 fix.go:216] guest clock: 1714422259.868914581
	I0429 20:24:19.867694    6560 fix.go:229] Guest: 2024-04-29 20:24:19.868914581 +0000 UTC Remote: 2024-04-29 20:24:14.9613787 +0000 UTC m=+133.724240401 (delta=4.907535881s)
	I0429 20:24:19.867694    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:22.005967    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:24.588016    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:24.588016    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:24.588016    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422259
	I0429 20:24:24.741766    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:24:19 UTC 2024
	
	I0429 20:24:24.741837    6560 fix.go:236] clock set: Mon Apr 29 20:24:19 UTC 2024
	 (err=<nil>)
	I0429 20:24:24.741837    6560 start.go:83] releasing machines lock for "multinode-515700", held for 2m17.7427319s
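The clock-fix sequence above (fix.go) reads the guest's epoch time over SSH with `date +%s.%N`, compares it to the host's reference time, and resets the guest clock with `sudo date -s @<epoch>` when they diverge (here the logged delta was ~4.9s). A minimal local sketch of that decision, with both timestamps simulated rather than read over SSH:

```shell
#!/bin/sh
# Hypothetical sketch of the clock-skew fix logged above: compare a guest epoch
# (from `date +%s.%N`, whole seconds) against the host's reference and decide
# whether to reset. Both values are hard-coded stand-ins, not live readings.
guest_epoch=1714422259
host_epoch=1714422254
delta=$((guest_epoch - host_epoch))
[ "$delta" -lt 0 ] && delta=$((-delta))
if [ "$delta" -gt 2 ]; then
    # The real code runs `sudo date -s @<host_epoch>` on the guest over SSH.
    echo "clock skew ${delta}s: would run 'sudo date -s @${host_epoch}'"
else
    echo "clock skew ${delta}s: within tolerance"
fi
```

The 2-second tolerance here is an illustrative threshold, not minikube's actual cutoff.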
	I0429 20:24:24.742129    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:26.884301    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:29.475377    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:29.476046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:29.480912    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:24:29.481639    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:29.493304    6560 ssh_runner.go:195] Run: cat /version.json
	I0429 20:24:29.493304    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:31.702922    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:34.435635    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.436190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.436258    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.480228    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.481073    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.481135    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.531424    6560 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 20:24:34.531720    6560 ssh_runner.go:235] Completed: cat /version.json: (5.0383759s)
	I0429 20:24:34.545943    6560 ssh_runner.go:195] Run: systemctl --version
	I0429 20:24:34.614256    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:24:34.615354    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1343125s)
	I0429 20:24:34.615354    6560 command_runner.go:130] > systemd 252 (252)
	I0429 20:24:34.615354    6560 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 20:24:34.630005    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:24:34.639051    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 20:24:34.639955    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:24:34.653590    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:24:34.683800    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:24:34.683903    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
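The `find ... -exec mv` step above sidelines conflicting bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix instead of deleting them, so they can be restored later. The same command can be exercised safely against a scratch directory standing in for `/etc/cni/net.d`:

```shell
#!/bin/sh
# Sketch of the CNI-disabling step logged above, run against a temp dir
# (not the real /etc/cni/net.d). The file names are illustrative.
cni_dir=$(mktemp -d)
touch "$cni_dir/87-podman-bridge.conflist" "$cni_dir/10-kindnet.conflist"
# Match bridge/podman configs that are not already disabled, and rename them.
find "$cni_dir" -maxdepth 1 -type f \
    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$cni_dir"
```

Only `87-podman-bridge.conflist` matches and is renamed; the kindnet config is left alone, which is why the log reports exactly one disabled file.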
	I0429 20:24:34.683903    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:34.684139    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:34.720958    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:24:34.735137    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:24:34.769077    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:24:34.791121    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:24:34.804751    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:24:34.838781    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.871052    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:24:34.905043    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.940043    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:24:34.975295    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:24:35.009502    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:24:35.044104    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
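The run of `sed` commands above rewrites `/etc/containerd/config.toml` so containerd uses the cgroupfs driver (`SystemdCgroup = false`) and the `io.containerd.runc.v2` shim. The two key substitutions can be reproduced against a scratch copy of the config:

```shell
#!/bin/sh
# Sketch of the containerd config edits logged above, applied to a scratch
# file rather than /etc/containerd/config.toml. Assumes GNU sed (-i, -r).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runtime.v1.linux"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# Force the cgroupfs driver, preserving the line's indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
# Migrate the legacy v1 linux shim to runc v2.
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
grep -E 'SystemdCgroup|runtime_type' "$cfg"
```

The TOML fragment is a minimal stand-in; a real config.toml carries many more sections, but these two lines are the ones the logged commands target.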
	I0429 20:24:35.078095    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:24:35.099570    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:24:35.114246    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:24:35.146794    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:35.365920    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:24:35.402710    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:35.417050    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Unit]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:24:35.443946    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:24:35.443946    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:24:35.443946    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Service]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Type=notify
	I0429 20:24:35.443946    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:24:35.443946    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:24:35.443946    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:24:35.443946    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:24:35.443946    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:24:35.443946    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:24:35.443946    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:24:35.443946    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:24:35.443946    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:24:35.443946    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:24:35.443946    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:24:35.443946    6560 command_runner.go:130] > Delegate=yes
	I0429 20:24:35.443946    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:24:35.443946    6560 command_runner.go:130] > KillMode=process
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Install]
	I0429 20:24:35.444947    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:24:35.457957    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.500818    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:24:35.548559    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.585869    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.622879    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:24:35.694256    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.721660    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:35.757211    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:24:35.773795    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:24:35.779277    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:24:35.793892    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:24:35.813834    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:24:35.865638    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:24:36.085117    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:24:36.291781    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:24:36.291781    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:24:36.337637    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:36.567033    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:39.106704    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396504s)
	I0429 20:24:39.121937    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 20:24:39.164421    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.201973    6560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 20:24:39.432817    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 20:24:39.648494    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:39.872471    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 20:24:39.918782    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.959078    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:40.189711    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 20:24:40.314827    6560 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 20:24:40.327765    6560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 20:24:40.337989    6560 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 20:24:40.338077    6560 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 20:24:40.338077    6560 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Modify: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Change: 2024-04-29 20:24:40.227771386 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] >  Birth: -
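The "Will wait 60s for socket path" step above polls `stat /var/run/cri-dockerd.sock` until the runtime socket appears or the deadline expires. A self-contained sketch of that wait loop, using a plain temp file as a stand-in for the socket:

```shell
#!/bin/sh
# Sketch of the socket-wait loop logged above. A background job touching a
# temp file simulates cri-dockerd creating its socket; the loop polls with
# stat until the path exists or 60s elapse.
sock=$(mktemp -u)
( sleep 1; touch "$sock" ) &
deadline=$(( $(date +%s) + 60 ))
until stat "$sock" >/dev/null 2>&1; do
    [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out"; exit 1; }
    sleep 1
done
echo "socket ready: $sock"
```

minikube's actual wait is implemented in Go, not shell; this only mirrors the observable behavior (stat polling with a 60s budget).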
	I0429 20:24:40.338228    6560 start.go:562] Will wait 60s for crictl version
	I0429 20:24:40.353543    6560 ssh_runner.go:195] Run: which crictl
	I0429 20:24:40.359551    6560 command_runner.go:130] > /usr/bin/crictl
	I0429 20:24:40.372542    6560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:24:40.422534    6560 command_runner.go:130] > Version:  0.1.0
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeName:  docker
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 20:24:40.422534    6560 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 20:24:40.433531    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.468470    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.477791    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.510922    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.518057    6560 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 20:24:40.518283    6560 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: 172.17.240.1/20
	I0429 20:24:40.538782    6560 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 20:24:40.546082    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
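The `/etc/hosts` update above first greps for an existing `host.minikube.internal` entry, then rewrites the file by filtering out any stale entry, appending the current host-side IP, and copying the result back via a temp file. The same pattern on a scratch hosts file:

```shell
#!/bin/sh
# Sketch of the host.minikube.internal update logged above, applied to a
# scratch file instead of the real /etc/hosts. The stale 172.17.99.9 entry
# is an invented example.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.17.99.9\thost.minikube.internal\n' > "$hosts"
ip=172.17.240.1
# Drop any old entry, append the fresh one, then replace the file in one copy.
{ grep -v 'host\.minikube\.internal$' "$hosts"; \
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a temp file and copying it back keeps readers of the file from ever seeing a half-written hosts file.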
	I0429 20:24:40.569927    6560 kubeadm.go:877] updating cluster {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:24:40.570125    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:24:40.581034    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:40.605162    6560 docker.go:685] Got preloaded images: 
	I0429 20:24:40.605162    6560 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 20:24:40.617894    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:40.637456    6560 command_runner.go:139] > {"Repositories":{}}
	I0429 20:24:40.652557    6560 ssh_runner.go:195] Run: which lz4
	I0429 20:24:40.659728    6560 command_runner.go:130] > /usr/bin/lz4
	I0429 20:24:40.659728    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 20:24:40.676390    6560 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:24:40.682600    6560 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 20:24:43.151463    6560 docker.go:649] duration metric: took 2.4917153s to copy over tarball
	I0429 20:24:43.166991    6560 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:24:51.777678    6560 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6106197s)
	I0429 20:24:51.777678    6560 ssh_runner.go:146] rm: /preloaded.tar.lz4
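The preload flow above scp's the ~360 MB image tarball to the guest, extracts it into `/var` with `tar -I lz4` (populating `/var/lib/docker`), and deletes the archive. A miniature sketch of the same copy/extract/cleanup cycle, with gzip standing in for lz4 so it runs without the `lz4` tool and temp dirs standing in for the host cache and guest `/var`:

```shell
#!/bin/sh
# Sketch of the preload tarball flow logged above. gzip replaces lz4 and temp
# dirs replace the real paths; the repositories.json content is illustrative.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/lib/docker/image"
echo '{"Repositories":{}}' > "$src/lib/docker/image/repositories.json"
tar -C "$src" -czf "$dst/preloaded.tar.gz" .   # host side: build the preload archive
tar -C "$dst" -xzf "$dst/preloaded.tar.gz"     # guest side: tar -I lz4 -C /var -xf
rm "$dst/preloaded.tar.gz"                     # reclaim the space once extracted
ls "$dst/lib/docker/image"
```

Deleting the archive immediately after extraction is why the log shows `rm: /preloaded.tar.lz4` right after the 8.6s untar completes.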
	I0429 20:24:51.848689    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:51.869772    6560 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 20:24:51.869772    6560 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 20:24:51.923721    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:52.150884    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:55.504316    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3534062s)
	I0429 20:24:55.515091    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:24:55.540357    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 20:24:55.540357    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:24:55.540557    6560 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 20:24:55.540557    6560 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:24:55.540557    6560 kubeadm.go:928] updating node { 172.17.241.25 8443 v1.30.0 docker true true} ...
	I0429 20:24:55.540557    6560 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-515700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.241.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:24:55.550945    6560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 20:24:55.586940    6560 command_runner.go:130] > cgroupfs
	I0429 20:24:55.587354    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:24:55.587354    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:24:55.587354    6560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:24:55.587354    6560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.241.25 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-515700 NodeName:multinode-515700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.241.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.241.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:24:55.587882    6560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.241.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-515700"
	  kubeletExtraArgs:
	    node-ip: 172.17.241.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.241.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:24:55.601173    6560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubeadm
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubectl
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubelet
	I0429 20:24:55.622022    6560 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:24:55.633924    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:24:55.654273    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 20:24:55.692289    6560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:24:55.726319    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:24:55.774801    6560 ssh_runner.go:195] Run: grep 172.17.241.25	control-plane.minikube.internal$ /etc/hosts
	I0429 20:24:55.781653    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.241.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
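	The bash one-liner in the command above is an idempotent rewrite pattern: strip any stale `control-plane.minikube.internal` entry from /etc/hosts, append the current IP, then copy the result back. A minimal, hedged sketch of the same pattern follows, run against a scratch file rather than the real /etc/hosts (the stale `10.0.0.5` entry is illustrative, not from this log):

```shell
# Sketch of the /etc/hosts rewrite pattern, on a throwaway copy.
tab=$(printf '\t')
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n' > "$hosts"
ip=172.17.241.25
# Drop any line ending in "<tab>control-plane.minikube.internal", then
# append the fresh mapping; write to a temp file and move it into place.
{ grep -v "${tab}control-plane.minikube.internal$" "$hosts"
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"
} > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Re-running the snippet leaves a single, current entry, which is why minikube can execute it unconditionally on every start.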
	I0429 20:24:55.820570    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:56.051044    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:24:56.087660    6560 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700 for IP: 172.17.241.25
	I0429 20:24:56.087753    6560 certs.go:194] generating shared ca certs ...
	I0429 20:24:56.087824    6560 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 20:24:56.089063    6560 certs.go:256] generating profile certs ...
	I0429 20:24:56.089855    6560 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key
	I0429 20:24:56.089855    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt with IP's: []
	I0429 20:24:56.283640    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt ...
	I0429 20:24:56.284633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt: {Name:mk1286f657dae134d1e4806ec4fc1d780c02f0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.285633    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key ...
	I0429 20:24:56.285633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key: {Name:mka98d4501f3f942abed1092b1c97c4a2bbd30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.286633    6560 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d
	I0429 20:24:56.287300    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.241.25]
	I0429 20:24:56.456862    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d ...
	I0429 20:24:56.456862    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d: {Name:mk09d828589f59d94791e90fc999c9ce1101118e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d ...
	I0429 20:24:56.458476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d: {Name:mk92ebf0409a99e4a3e3430ff86080f164f4bc0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458796    6560 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt
	I0429 20:24:56.473961    6560 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key
	I0429 20:24:56.474965    6560 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key
	I0429 20:24:56.474965    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt with IP's: []
	I0429 20:24:56.680472    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt ...
	I0429 20:24:56.680472    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt: {Name:mkc600562c7738e3eec9de4025428e3048df463a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.682476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key ...
	I0429 20:24:56.682476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key: {Name:mkc9ba6e1afbc9ca05ce8802b568a72bfd19a90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 20:24:56.685482    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 20:24:56.693323    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 20:24:56.701358    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 20:24:56.702409    6560 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 20:24:56.702718    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 20:24:56.702843    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 20:24:56.705315    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:24:56.758912    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 20:24:56.809584    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:24:56.860874    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:24:56.918708    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:24:56.969377    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:24:57.018903    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:24:57.070438    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:24:57.119823    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:24:57.168671    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 20:24:57.216697    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 20:24:57.263605    6560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:24:57.314590    6560 ssh_runner.go:195] Run: openssl version
	I0429 20:24:57.325614    6560 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 20:24:57.340061    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:24:57.374639    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.394971    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.404667    6560 command_runner.go:130] > b5213941
	I0429 20:24:57.419162    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:24:57.454540    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 20:24:57.494441    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.501867    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.502224    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.517134    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.527174    6560 command_runner.go:130] > 51391683
	I0429 20:24:57.544472    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 20:24:57.579789    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 20:24:57.613535    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622605    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622696    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.637764    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.649176    6560 command_runner.go:130] > 3ec20f2e
	I0429 20:24:57.665410    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
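	The three symlink commands above wire certs into OpenSSL's trust store, which looks certificates up by subject-name hash: each PEM needs a `<hash>.0` symlink next to it in /etc/ssl/certs. A hedged, self-contained sketch of that mechanism, using a throwaway self-signed CA in a temp directory in place of minikubeCA.pem:

```shell
# Generate a scratch CA, compute its OpenSSL subject hash, and create the
# <hash>.0 symlink the way the log's ln -fs commands do. demoCA is a
# stand-in name; the hash value depends on the generated certificate.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$certdir/ca.key" -out "$certdir/demoCA.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certdir/demoCA.pem")
ln -fs "$certdir/demoCA.pem" "$certdir/$hash.0"
# Resolving the link yields the same hash, confirming lookup would work.
openssl x509 -hash -noout -in "$certdir/$hash.0"
```

This is also why the log hashes each cert (`b5213941`, `51391683`, `3ec20f2e`) before linking: the link name is derived from the cert, not chosen freely.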
	I0429 20:24:57.708796    6560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:24:57.716466    6560 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717133    6560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717510    6560 kubeadm.go:391] StartCluster: {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:24:57.729105    6560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 20:24:57.771112    6560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 20:24:57.792952    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 20:24:57.807601    6560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:24:57.837965    6560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 20:24:57.856820    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:156] found existing configuration files:
	
	I0429 20:24:57.872870    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:24:57.892109    6560 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.892549    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.905782    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:24:57.939062    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:24:57.957607    6560 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.957753    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.972479    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:24:58.006849    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.025918    6560 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.025918    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.039054    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.072026    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:24:58.092314    6560 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.092673    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.105776    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:24:58.124274    6560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:24:58.562957    6560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:24:58.562957    6560 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:25:12.186137    6560 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186137    6560 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186277    6560 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 20:25:12.186320    6560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:25:12.186516    6560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.187085    6560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.187131    6560 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.190071    6560 out.go:204]   - Generating certificates and keys ...
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190667    6560 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.191251    6560 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191251    6560 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 20:25:12.193040    6560 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193086    6560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193701    6560 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193701    6560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.198949    6560 out.go:204]   - Booting up control plane ...
	I0429 20:25:12.193843    6560 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199855    6560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:25:12.200494    6560 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200494    6560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201207    6560 command_runner.go:130] > [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.202201    6560 command_runner.go:130] > [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 out.go:204]   - Configuring RBAC rules ...
	I0429 20:25:12.202201    6560 command_runner.go:130] > [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.204361    6560 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.206433    6560 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206433    6560 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206983    6560 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 kubeadm.go:309] 
	I0429 20:25:12.207142    6560 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 kubeadm.go:309] 
	I0429 20:25:12.207365    6560 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207404    6560 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207464    6560 kubeadm.go:309] 
	I0429 20:25:12.207514    6560 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 20:25:12.207589    6560 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:25:12.207764    6560 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.207807    6560 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.208030    6560 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 kubeadm.go:309] 
	I0429 20:25:12.208230    6560 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208230    6560 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208281    6560 kubeadm.go:309] 
	I0429 20:25:12.208375    6560 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208375    6560 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208442    6560 kubeadm.go:309] 
	I0429 20:25:12.208643    6560 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 20:25:12.208733    6560 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:25:12.208874    6560 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.208936    6560 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.209014    6560 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209014    6560 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209735    6560 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209735    6560 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--control-plane 
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--control-plane 
	I0429 20:25:12.210277    6560 kubeadm.go:309] 
	I0429 20:25:12.210538    6560 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] 
	I0429 20:25:12.210726    6560 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210726    6560 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210937    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:25:12.211197    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:25:12.215717    6560 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 20:25:12.234164    6560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 20:25:12.242817    6560 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: 2024-04-29 20:23:14.801002600 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Change: 2024-04-29 20:23:06.257000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] >  Birth: -
	I0429 20:25:12.242817    6560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 20:25:12.242817    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 20:25:12.301387    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 20:25:13.060621    6560 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > serviceaccount/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > daemonset.apps/kindnet created
	I0429 20:25:13.060707    6560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-515700 minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=multinode-515700 minikube.k8s.io/primary=true
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.092072    6560 command_runner.go:130] > -16
	I0429 20:25:13.092113    6560 ops.go:34] apiserver oom_adj: -16
	I0429 20:25:13.290753    6560 command_runner.go:130] > node/multinode-515700 labeled
	I0429 20:25:13.292700    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 20:25:13.306335    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.426974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:13.819653    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.947766    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.320587    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.442246    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.822864    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.943107    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.309117    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.432718    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.814070    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.933861    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.317878    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.440680    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.819594    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.942387    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.322995    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.435199    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.809136    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.932465    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.308164    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.429047    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.808817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.928476    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.310090    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.432479    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.815590    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.929079    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.321723    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.442512    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.819466    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.933742    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.309314    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.424974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.811819    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.952603    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.316794    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.432125    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.808890    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.925838    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.310021    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.434432    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.819369    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.948876    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.307817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.457947    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.818980    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.932003    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:25.308659    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:25.488149    6560 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 20:25:25.488217    6560 command_runner.go:130] > default   0         1s
	I0429 20:25:25.489686    6560 kubeadm.go:1107] duration metric: took 12.4288824s to wait for elevateKubeSystemPrivileges
	W0429 20:25:25.489686    6560 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:25:25.489686    6560 kubeadm.go:393] duration metric: took 27.7719601s to StartCluster
	I0429 20:25:25.490694    6560 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.490694    6560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:25.491677    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.493697    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 20:25:25.493697    6560 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:25:25.498680    6560 out.go:177] * Verifying Kubernetes components...
	I0429 20:25:25.493697    6560 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:25:25.494664    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:25.504657    6560 addons.go:69] Setting storage-provisioner=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:69] Setting default-storageclass=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:234] Setting addon storage-provisioner=true in "multinode-515700"
	I0429 20:25:25.504657    6560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-515700"
	I0429 20:25:25.504657    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.520673    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:25:25.944109    6560 command_runner.go:130] > apiVersion: v1
	I0429 20:25:25.944267    6560 command_runner.go:130] > data:
	I0429 20:25:25.944267    6560 command_runner.go:130] >   Corefile: |
	I0429 20:25:25.944367    6560 command_runner.go:130] >     .:53 {
	I0429 20:25:25.944367    6560 command_runner.go:130] >         errors
	I0429 20:25:25.944367    6560 command_runner.go:130] >         health {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            lameduck 5s
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         ready
	I0429 20:25:25.944367    6560 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            pods insecure
	I0429 20:25:25.944367    6560 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 20:25:25.944367    6560 command_runner.go:130] >            ttl 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         prometheus :9153
	I0429 20:25:25.944367    6560 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            max_concurrent 1000
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         cache 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loop
	I0429 20:25:25.944367    6560 command_runner.go:130] >         reload
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loadbalance
	I0429 20:25:25.944367    6560 command_runner.go:130] >     }
	I0429 20:25:25.944367    6560 command_runner.go:130] > kind: ConfigMap
	I0429 20:25:25.944367    6560 command_runner.go:130] > metadata:
	I0429 20:25:25.944367    6560 command_runner.go:130] >   creationTimestamp: "2024-04-29T20:25:11Z"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   name: coredns
	I0429 20:25:25.944367    6560 command_runner.go:130] >   namespace: kube-system
	I0429 20:25:25.944367    6560 command_runner.go:130] >   resourceVersion: "265"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   uid: af2c186a-a14a-4671-8545-05c5ef5d4a89
	I0429 20:25:25.949389    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 20:25:26.023682    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:25:26.408680    6560 command_runner.go:130] > configmap/coredns replaced
	I0429 20:25:26.414254    6560 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.417677    6560 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 20:25:26.417677    6560 node_ready.go:35] waiting up to 6m0s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.435291    6560 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.437034    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.438430    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Audit-Id: a2ae57e5-53a3-4342-ad5c-c2149e87ef04
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438430    6560 round_trippers.go:580]     Audit-Id: 2e6b22a8-9874-417c-a6a5-f7b7437121f7
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438909    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.439086    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:26.440203    6560 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.440298    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.440406    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.440406    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.440519    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:26.440519    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.459913    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.459962    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Audit-Id: 9ca07d91-957f-4992-9642-97b01e07dde3
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.459962    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"393","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.918339    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.918339    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918339    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918339    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.918300    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.918498    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918580    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918580    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Audit-Id: 70383541-35df-461a-b4fb-41bd3b56f11d
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Audit-Id: e628428d-1384-4709-a32e-084c9dfec614
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.929164    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"404","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.929400    6560 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-515700" context rescaled to 1 replicas
	I0429 20:25:26.929400    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.426913    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.426913    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.426913    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.426913    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.430795    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:27.430795    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Audit-Id: e4e6b2b1-e008-4f2a-bae4-3596fce97666
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.430996    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.431340    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.789217    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.789348    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.792426    6560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:25:27.791141    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:27.795103    6560 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:27.795205    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:25:27.795205    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.795205    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:27.795924    6560 addons.go:234] Setting addon default-storageclass=true in "multinode-515700"
	I0429 20:25:27.795924    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:27.796802    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.922993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.923088    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.923175    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.923175    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.929435    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:27.929435    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.929545    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.929545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.929638    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Audit-Id: 8ef77f9f-d18f-4fa7-ab77-85c137602c84
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.930046    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.432611    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.432611    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.432611    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.432611    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.441320    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:28.441862    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Audit-Id: d32cd9f8-494c-4a69-b028-606c7d354657
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.442308    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.442914    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:28.927674    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.927674    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.927674    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.927897    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.933213    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:28.933794    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Audit-Id: 75d40b2c-c2ed-4221-9361-88591791a649
	I0429 20:25:28.934208    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.422724    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.422898    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.422898    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.422975    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.426431    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:29.426876    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Audit-Id: dde47b6c-069b-408d-a5c6-0a2ea7439643
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.427261    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.918308    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.918308    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.918308    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.918407    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.921072    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:29.921072    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.921072    6560 round_trippers.go:580]     Audit-Id: d4643df6-68ad-4c4c-9604-a5a4d019fba1
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.922076    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.057466    6560 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:30.057636    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:25:30.057750    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:30.145026    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:30.424041    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.424310    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.424310    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.424310    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.428606    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.429051    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.429263    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Audit-Id: 2c59a467-8079-41ed-ac1d-f96dd660d343
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.429435    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.931993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.931993    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.931993    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.931993    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.936635    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.936635    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.937644    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Audit-Id: 9214de5b-8221-4c68-b6b9-a92fe7d41fd1
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.938175    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.939066    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:31.423866    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.423866    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.423866    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.423988    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.427054    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.427827    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.427827    6560 round_trippers.go:580]     Audit-Id: 5f66acb8-ef38-4220-83b6-6e87fbec6f58
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.427869    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:31.932664    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.932664    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.932761    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.932761    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.936680    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.936680    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Audit-Id: f9fb721e-ccaf-4e33-ac69-8ed840761191
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.937009    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.312723    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:32.424680    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.424953    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.424953    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.424953    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.428624    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:32.428906    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.428906    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Audit-Id: d3a39f3a-571d-46c0-a442-edf136da8a11
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.429531    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.858444    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:32.926226    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.926317    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.926393    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.926393    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.929204    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:32.929583    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.929583    6560 round_trippers.go:580]     Audit-Id: 55fc987d-65c0-4ac8-95d2-7fa4185e179b
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.929734    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.930327    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.034553    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:33.425759    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.425833    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.425833    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.425833    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.428624    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.429656    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Audit-Id: d581fce7-8906-48d7-8e13-2d1aba9dec04
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.429916    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.430438    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:33.930984    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.931053    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.931053    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.931053    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.933717    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.933717    6560 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.933717    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.933717    6560 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Audit-Id: 680ed792-db71-4b29-abb9-40f7154e8b1e
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.933717    6560 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 20:25:33.933717    6560 command_runner.go:130] > pod/storage-provisioner created
	I0429 20:25:33.933717    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.428102    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.428102    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.428102    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.428102    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.431722    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:34.432624    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Audit-Id: 86cc0608-3000-42b0-9ce8-4223e32d60c3
	I0429 20:25:34.432684    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.433082    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.932029    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.932316    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.932316    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.932316    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.936749    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:34.936749    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Audit-Id: 0e63a4db-3dd4-4e74-8b79-c019b6b97e89
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.937415    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:35.024893    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:35.025151    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:35.025317    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:35.170634    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:35.371184    6560 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0429 20:25:35.371418    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 20:25:35.371571    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.371571    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.371571    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.380781    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:35.381213    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Audit-Id: 31f5e265-3d38-4520-88d0-33f47325189c
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Length: 1273
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.381380    6560 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 20:25:35.382106    6560 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.382183    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 20:25:35.382183    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:35.382269    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.390758    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.390758    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.390758    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Audit-Id: 4dbb716e-2d97-4c38-b342-f63e7d38a4d0
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Length: 1220
	I0429 20:25:35.391190    6560 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.395279    6560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 20:25:35.397530    6560 addons.go:505] duration metric: took 9.9037568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 20:25:35.421733    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.421733    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.421733    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.421733    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.452743    6560 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 20:25:35.452743    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.452743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Audit-Id: 316d0393-7ba5-4629-87cb-7ae54d0ea965
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.454477    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.455068    6560 node_ready.go:49] node "multinode-515700" has status "Ready":"True"
	I0429 20:25:35.455148    6560 node_ready.go:38] duration metric: took 9.0374019s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:35.455148    6560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:35.455213    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:35.455213    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.455213    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.455213    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.473128    6560 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 20:25:35.473128    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Audit-Id: 81e159c0-b703-47ba-a9f3-82cc907b8705
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.475820    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"431","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0429 20:25:35.481714    6560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:35.482325    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.482379    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.482379    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.482432    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.491093    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.491093    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Audit-Id: a2eb7ca2-d415-4a7c-a1f0-1ac743bd8f82
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.492090    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.493335    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.493335    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.493335    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.493419    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.496084    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:35.496084    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.496084    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Audit-Id: f61c97ad-ee7a-4666-9244-d7d2091b5d09
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.497097    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.497131    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.497332    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.991323    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.991323    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.991323    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.991323    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.995451    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:35.995451    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Audit-Id: faa8a1a4-279f-4dc3-99c8-8c3b9e9ed746
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:35.996592    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.997239    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.997292    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.997292    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.997292    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.999987    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:35.999987    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Audit-Id: 070c7fff-f707-4b9a-9aef-031cedc68a8c
	I0429 20:25:36.000411    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.483004    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.483004    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.483004    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.483004    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.488152    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:36.488152    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.488152    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.488678    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Audit-Id: fb5cc675-b39d-4cb0-ba8c-24140b3d95e8
	I0429 20:25:36.489818    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.490926    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.490926    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.490985    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.490985    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.494654    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:36.494654    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Audit-Id: fe6d880a-4cf8-4b10-8c7f-debde123eafc
	I0429 20:25:36.495423    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.991643    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.991643    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.991643    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.991855    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.996384    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:36.996384    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Audit-Id: 933a6dd5-a0f7-4380-8189-3e378a8a620d
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:36.997332    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.997760    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.997760    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.997760    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.997760    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.000889    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.000889    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.001211    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Audit-Id: 0342e743-45a6-4fd7-97be-55a766946396
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.001274    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.001759    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.495712    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:37.495712    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.495712    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.495712    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.508671    6560 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 20:25:37.508671    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Audit-Id: d30c6154-a41b-4a0d-976f-d19f40e67223
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.508671    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0429 20:25:37.510663    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.510663    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.510663    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.510663    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.513686    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.513686    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Audit-Id: 397b83a5-95f9-4df8-a76b-042ecc96922c
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.514662    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.514662    6560 pod_ready.go:92] pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.514662    6560 pod_ready.go:81] duration metric: took 2.0329329s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-515700
	I0429 20:25:37.514662    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.514662    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.514662    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.517691    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.517691    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Audit-Id: df53f071-06ed-4797-a51b-7d01b84cac86
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.518412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-515700","namespace":"kube-system","uid":"85f2dc9a-17b5-413c-ab83-d3cbe955571e","resourceVersion":"319","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.241.25:2379","kubernetes.io/config.hash":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.mirror":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.seen":"2024-04-29T20:25:11.718525866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0429 20:25:37.519044    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.519044    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.519124    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.519124    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.521788    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.521788    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Audit-Id: ee5fdb3e-9869-4cd7-996a-a25b453aeb6b
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.521944    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.522769    6560 pod_ready.go:92] pod "etcd-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.522844    6560 pod_ready.go:81] duration metric: took 8.1819ms for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.522844    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.523015    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-515700
	I0429 20:25:37.523015    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.523079    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.523079    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.525575    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.525575    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Audit-Id: cd9d851c-f606-48c9-8da3-3d194ab5464f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.526015    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-515700","namespace":"kube-system","uid":"f5a212eb-87a9-476a-981a-9f31731f39e6","resourceVersion":"312","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.241.25:8443","kubernetes.io/config.hash":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.mirror":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.seen":"2024-04-29T20:25:11.718530866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0429 20:25:37.526356    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.526356    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.526356    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.526356    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.535954    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:37.535954    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Audit-Id: 018aa21f-d408-4777-b54c-eb7aa2295a34
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.536470    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.536974    6560 pod_ready.go:92] pod "kube-apiserver-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.537034    6560 pod_ready.go:81] duration metric: took 14.0881ms for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537034    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537183    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-515700
	I0429 20:25:37.537276    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.537297    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.537297    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.539964    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.539964    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Audit-Id: d3232756-fc07-4b33-a3b5-989d2932cec4
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.541274    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-515700","namespace":"kube-system","uid":"2c9ba563-c2af-45b7-bc1e-bf39759a339b","resourceVersion":"315","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.mirror":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.seen":"2024-04-29T20:25:11.718532066Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0429 20:25:37.541935    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.541935    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.541935    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.541935    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.555960    6560 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 20:25:37.555960    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Audit-Id: 2d117219-3b1a-47fe-99a4-7e5aea7e84d3
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.555960    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.555960    6560 pod_ready.go:92] pod "kube-controller-manager-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.555960    6560 pod_ready.go:81] duration metric: took 18.9251ms for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.555960    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.556943    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gx5x
	I0429 20:25:37.556943    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.556943    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.556943    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.559965    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.560477    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.560477    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Audit-Id: 14e6d1be-eac6-4f20-9621-b409c951fae1
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.560781    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gx5x","generateName":"kube-proxy-","namespace":"kube-system","uid":"886ac698-7e9b-431b-b822-577331b02f41","resourceVersion":"407","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"027f1d05-009f-4199-81e9-45b0a2d3710f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"027f1d05-009f-4199-81e9-45b0a2d3710f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0429 20:25:37.561552    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.561581    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.561581    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.561581    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.567713    6560 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 20:25:37.567713    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Audit-Id: 678df177-6944-4d30-b889-62528c06bab2
	I0429 20:25:37.567713    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.568391    6560 pod_ready.go:92] pod "kube-proxy-6gx5x" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.568391    6560 pod_ready.go:81] duration metric: took 12.4313ms for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.568391    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.701559    6560 request.go:629] Waited for 132.9214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701779    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701853    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.701853    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.701853    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.705314    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.706129    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.706129    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.706183    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Audit-Id: 4fb010ad-4d68-4aa0-9ba4-f68d04faa9e8
	I0429 20:25:37.706412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-515700","namespace":"kube-system","uid":"096d3e94-25ba-49b3-b329-6fb47fc88f25","resourceVersion":"334","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.mirror":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.seen":"2024-04-29T20:25:11.718533166Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0429 20:25:37.905204    6560 request.go:629] Waited for 197.8802ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.905322    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.905466    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.909057    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.909159    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Audit-Id: a6cecf7e-83ad-4d5f-8cbb-a65ced7e83ce
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.909286    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.909697    6560 pod_ready.go:92] pod "kube-scheduler-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.909697    6560 pod_ready.go:81] duration metric: took 341.3037ms for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.909697    6560 pod_ready.go:38] duration metric: took 2.4545299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:37.909697    6560 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:25:37.923721    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:25:37.956142    6560 command_runner.go:130] > 2047
	I0429 20:25:37.956226    6560 api_server.go:72] duration metric: took 12.462433s to wait for apiserver process to appear ...
	I0429 20:25:37.956226    6560 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:25:37.956330    6560 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:25:37.965150    6560 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:25:37.965332    6560 round_trippers.go:463] GET https://172.17.241.25:8443/version
	I0429 20:25:37.965364    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.965364    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.965364    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.967124    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:37.967124    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Audit-Id: c3b17e5f-8eb5-4422-bcd1-48cea5831311
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.967423    6560 round_trippers.go:580]     Content-Length: 263
	I0429 20:25:37.967423    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 20:25:37.967530    6560 api_server.go:141] control plane version: v1.30.0
	I0429 20:25:37.967530    6560 api_server.go:131] duration metric: took 11.2306ms to wait for apiserver health ...
	I0429 20:25:37.967629    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:25:38.109818    6560 request.go:629] Waited for 142.1878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110201    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110256    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.110275    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.110275    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.118070    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.118070    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Audit-Id: 557b3073-d14e-4919-8133-995d5b042d22
	I0429 20:25:38.119823    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.123197    6560 system_pods.go:59] 8 kube-system pods found
	I0429 20:25:38.123197    6560 system_pods.go:61] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.123197    6560 system_pods.go:74] duration metric: took 155.566ms to wait for pod list to return data ...
	I0429 20:25:38.123197    6560 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:25:38.295950    6560 request.go:629] Waited for 172.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.296300    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.296300    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.300424    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:38.300424    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Content-Length: 261
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Audit-Id: 7466bf5b-fa07-4a6b-bc06-274738fc9259
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.300674    6560 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"13c4332f-9236-4f04-9e46-f5a98bc3d731","resourceVersion":"343","creationTimestamp":"2024-04-29T20:25:24Z"}}]}
	I0429 20:25:38.300674    6560 default_sa.go:45] found service account: "default"
	I0429 20:25:38.300674    6560 default_sa.go:55] duration metric: took 177.4758ms for default service account to be created ...
	I0429 20:25:38.300674    6560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:25:38.498686    6560 request.go:629] Waited for 197.291ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.498782    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.499005    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.499005    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.499005    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.506756    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.507387    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Audit-Id: ffc5efdb-4263-4450-8ff2-c1bb3f979300
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.507485    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.507503    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.508809    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.512231    6560 system_pods.go:86] 8 kube-system pods found
	I0429 20:25:38.512305    6560 system_pods.go:89] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.512305    6560 system_pods.go:89] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.512451    6560 system_pods.go:89] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.512451    6560 system_pods.go:126] duration metric: took 211.7756ms to wait for k8s-apps to be running ...
	I0429 20:25:38.512451    6560 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:25:38.526027    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:25:38.555837    6560 system_svc.go:56] duration metric: took 43.3852ms WaitForService to wait for kubelet
	I0429 20:25:38.555837    6560 kubeadm.go:576] duration metric: took 13.0620394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:25:38.556007    6560 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:25:38.701455    6560 request.go:629] Waited for 145.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701896    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701917    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.701917    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.702032    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.709221    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.709221    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Audit-Id: 9241b2a0-c483-4bfb-8a19-8f5c9b610b53
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.709221    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0429 20:25:38.710061    6560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:25:38.710061    6560 node_conditions.go:123] node cpu capacity is 2
	I0429 20:25:38.710061    6560 node_conditions.go:105] duration metric: took 154.0529ms to run NodePressure ...
	I0429 20:25:38.710061    6560 start.go:240] waiting for startup goroutines ...
	I0429 20:25:38.710061    6560 start.go:245] waiting for cluster config update ...
	I0429 20:25:38.710061    6560 start.go:254] writing updated cluster config ...
	I0429 20:25:38.717493    6560 out.go:177] 
	I0429 20:25:38.721129    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.735840    6560 out.go:177] * Starting "multinode-515700-m02" worker node in "multinode-515700" cluster
	I0429 20:25:38.738518    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:25:38.738518    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:25:38.738983    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:25:38.739240    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:25:38.739481    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.745029    6560 start.go:360] acquireMachinesLock for multinode-515700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:25:38.745029    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m02"
	I0429 20:25:38.745029    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 20:25:38.745575    6560 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 20:25:38.748852    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:25:38.748852    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:25:38.748852    6560 client.go:168] LocalClient.Create starting
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:40.746212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:25:42.605453    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:47.992432    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:47.992702    6560 main.go:141] libmachine: [stderr =====>] : 
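The Get-VMSwitch query above returns switches as JSON, and the driver then picks one ("Using switch \"Default Switch\"" later in the log). A sketch of that selection step, assuming the simplified rule: prefer an External switch (SwitchType 2 in Hyper-V's enum; the Default Switch above reports 1, Internal), else fall back to the well-known Default Switch GUID. The function name is illustrative:

```python
import json

# GUID of Hyper-V's built-in "Default Switch", as queried in the log above.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def choose_switch(get_vmswitch_json: str) -> str:
    """Pick a usable switch from Get-VMSwitch JSON output."""
    switches = json.loads(get_vmswitch_json)
    # Hyper-V VMSwitchType: 0=Private, 1=Internal, 2=External.
    for s in switches:
        if s.get("SwitchType") == 2:
            return s["Name"]
    for s in switches:
        if s.get("Id") == DEFAULT_SWITCH_ID:
            return s["Name"]
    raise RuntimeError("no usable Hyper-V switch found")
```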
	I0429 20:25:47.996014    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:25:48.551162    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: Creating VM...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:51.874174    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:25:51.874221    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:53.736899    6560 main.go:141] libmachine: Creating VHD
	I0429 20:25:53.737514    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D65FFD0C-285E-44D0-8723-21544BDDE71A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:25:57.529054    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:00.734035    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -SizeBytes 20000MB
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stderr =====>] : 
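The disk-prep sequence above uses the docker-machine "magic tar header" trick: create a tiny fixed VHD, write a tar archive containing the SSH key into its raw bytes, then convert to dynamic and resize; the boot2docker guest detects the tar magic on first boot and extracts the key before formatting the rest of the disk. A Python sketch of the write/read halves of that idea, assuming illustrative file names (not minikube's actual paths or exact layout):

```python
import io
import tarfile

def write_keyed_disk(path: str, key_bytes: bytes, disk_size: int) -> None:
    """Create a raw disk image whose leading bytes are a tar archive
    holding an SSH key (mirroring 'Writing SSH key tar header')."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")
        info.size = len(key_bytes)
        tar.addfile(info, io.BytesIO(key_bytes))
    with open(path, "wb") as f:
        f.write(buf.getvalue())
        # Pad the image out to full disk size with sparse zeros; tar
        # readers treat the zero blocks as end-of-archive.
        f.truncate(disk_size)

def read_key_back(path: str) -> bytes:
    """Recover the key from the front of the raw disk image, as a
    first-boot script inside the guest would."""
    with tarfile.open(path, mode="r") as tar:
        member = tar.extractfile(".ssh/authorized_keys")
        assert member is not None
        return member.read()
```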
	I0429 20:26:03.314283    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-515700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700-m02 -DynamicMemoryEnabled $false
	I0429 20:26:09.480100    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700-m02 -Count 2
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:11.716979    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\boot2docker.iso'
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:14.377298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd'
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:17.090909    6560 main.go:141] libmachine: Starting VM...
	I0429 20:26:17.090909    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m02
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:22.527096    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:26.113296    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:28.339433    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:30.953587    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:30.953628    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:31.955478    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:34.197688    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:34.197831    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:34.197901    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:37.817016    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:42.682666    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:42.683603    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:43.685897    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:48.604877    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:48.604915    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:48.604999    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:50.794876    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:50.795093    6560 main.go:141] libmachine: [stderr =====>] : 
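The "Waiting for host to start..." stretch above is a poll loop: query the VM state and first adapter IP, sleep roughly a second, and repeat until a non-empty IP appears (here, 172.17.253.145 after about 26 seconds). A minimal sketch of that loop, where `get_ip` stands in for the PowerShell query `(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]` and is an assumption for illustration:

```python
import time
from typing import Callable, Optional

def wait_for_ip(get_ip: Callable[[], str], timeout: float = 120.0,
                interval: float = 1.0) -> Optional[str]:
    """Poll get_ip() until it returns a non-empty address or the
    deadline passes; return the address, or None on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = get_ip().strip()
        if ip:
            return ip
        time.sleep(interval)  # mirrors the ~1s back-off between queries
    return None
```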
	I0429 20:26:50.795407    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:26:50.795407    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:52.992195    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:52.992243    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:52.992331    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:55.630552    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:55.641728    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:26:55.642758    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:26:55.769182    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:26:55.769182    6560 buildroot.go:166] provisioning hostname "multinode-515700-m02"
	I0429 20:26:55.769333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:57.942857    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:57.943721    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:57.943789    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:00.610012    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:00.610498    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:00.617342    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:00.618022    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:00.618022    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700-m02 && echo "multinode-515700-m02" | sudo tee /etc/hostname
	I0429 20:27:00.774430    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m02
	
	I0429 20:27:00.775391    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:02.970796    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:02.971352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:02.971577    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:05.640782    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:05.640782    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:05.640782    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:27:05.779330    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
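The SSH script above keeps /etc/hosts consistent with the new hostname: if the hostname is already present do nothing, else rewrite an existing 127.0.1.1 line in place, else append one. The same idempotent logic as a Python sketch over an in-memory hosts string (the function name is illustrative; the real work happens via sed/tee over SSH as shown):

```python
import re

def set_loopback_hostname(hosts_text: str, hostname: str) -> str:
    """Return hosts_text with '127.0.1.1 <hostname>' present exactly once."""
    entry = f"127.0.1.1 {hostname}"
    # Hostname already mapped on some line: leave the file untouched.
    if re.search(rf"\s{re.escape(hostname)}$", hosts_text, flags=re.MULTILINE):
        return hosts_text
    # An existing 127.0.1.1 entry: rewrite it, like the sed branch.
    if re.search(r"^127\.0\.1\.1\s", hosts_text, flags=re.MULTILINE):
        return re.sub(r"^127\.0\.1\.1\s.*$", entry, hosts_text,
                      flags=re.MULTILINE)
    # No entry at all: append one, like the tee -a branch.
    return hosts_text.rstrip("\n") + "\n" + entry + "\n"
```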
	I0429 20:27:05.779330    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:27:05.779435    6560 buildroot.go:174] setting up certificates
	I0429 20:27:05.779435    6560 provision.go:84] configureAuth start
	I0429 20:27:05.779531    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:07.939785    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:10.607752    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:10.608236    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:10.608319    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:15.428095    6560 provision.go:143] copyHostCerts
	I0429 20:27:15.429066    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:27:15.429066    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:27:15.429066    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:27:15.429626    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:27:15.430936    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:27:15.431366    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:27:15.431366    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:27:15.431875    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:27:15.432822    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:27:15.433064    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:27:15.433064    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:27:15.433498    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:27:15.434807    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700-m02 san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]
	I0429 20:27:15.511954    6560 provision.go:177] copyRemoteCerts
	I0429 20:27:15.527105    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:27:15.527105    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:20.368198    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:20.368587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:20.368930    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:20.467819    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9406764s)
	I0429 20:27:20.468832    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:27:20.469887    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:27:20.524889    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:27:20.525559    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 20:27:20.578020    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:27:20.578217    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:27:20.634803    6560 provision.go:87] duration metric: took 14.8552541s to configureAuth
	I0429 20:27:20.634874    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:27:20.635533    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:27:20.635638    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:22.779762    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:25.427345    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:25.427345    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:25.428345    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:27:25.562050    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:27:25.562195    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:27:25.562515    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:27:25.562592    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:30.404141    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:30.405195    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:30.412105    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:30.413171    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:30.413700    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.241.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:27:30.578477    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.241.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:27:30.578477    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:32.772580    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:35.465933    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:35.466426    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:35.466509    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:27:37.701893    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:27:37.701981    6560 machine.go:97] duration metric: took 46.9062133s to provisionDockerMachine
	I0429 20:27:37.702052    6560 client.go:171] duration metric: took 1m58.9522849s to LocalClient.Create
	I0429 20:27:37.702194    6560 start.go:167] duration metric: took 1m58.9524269s to libmachine.API.Create "multinode-515700"
	I0429 20:27:37.702194    6560 start.go:293] postStartSetup for "multinode-515700-m02" (driver="hyperv")
	I0429 20:27:37.702194    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:27:37.716028    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:27:37.716028    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:39.888498    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:39.889355    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:39.889707    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:42.576527    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:42.688245    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9721792s)
	I0429 20:27:42.703472    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:27:42.710185    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:27:42.710391    6560 command_runner.go:130] > ID=buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:27:42.710391    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:27:42.710562    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:27:42.710562    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:27:42.710640    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:27:42.712121    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:27:42.712121    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:27:42.725734    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:27:42.745571    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:27:42.798223    6560 start.go:296] duration metric: took 5.0959902s for postStartSetup
	I0429 20:27:42.801718    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:44.985225    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:47.630520    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:27:47.633045    6560 start.go:128] duration metric: took 2m8.8864784s to createHost
	I0429 20:27:47.633167    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:49.823309    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:49.823412    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:49.823495    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:52.524084    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:52.524183    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:52.530451    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:52.531204    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:52.531204    6560 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 20:27:52.658970    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422472.660345683
	
	I0429 20:27:52.659208    6560 fix.go:216] guest clock: 1714422472.660345683
	I0429 20:27:52.659208    6560 fix.go:229] Guest: 2024-04-29 20:27:52.660345683 +0000 UTC Remote: 2024-04-29 20:27:47.6330452 +0000 UTC m=+346.394263801 (delta=5.027300483s)
	I0429 20:27:52.659208    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:57.461861    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:57.461927    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:57.467747    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:57.468699    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:57.468699    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422472
	I0429 20:27:57.617018    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:27:52 UTC 2024
	
	I0429 20:27:57.617018    6560 fix.go:236] clock set: Mon Apr 29 20:27:52 UTC 2024
	 (err=<nil>)
	I0429 20:27:57.617018    6560 start.go:83] releasing machines lock for "multinode-515700-m02", held for 2m18.8709228s
	I0429 20:27:57.618122    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:59.795247    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:02.475615    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:02.475867    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:02.479078    6560 out.go:177] * Found network options:
	I0429 20:28:02.481434    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.483990    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.486147    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.488513    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 20:28:02.490094    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.492090    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:28:02.492090    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:02.504078    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:28:02.504078    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:07.440744    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.440938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.441026    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.467629    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.629032    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:28:07.630105    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1379759s)
	I0429 20:28:07.630105    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 20:28:07.630229    6560 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1259881s)
	W0429 20:28:07.630229    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:28:07.649597    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:28:07.685721    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:28:07.685954    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:28:07.685954    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:07.686227    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:07.722613    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:28:07.736060    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:28:07.771561    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:28:07.793500    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:28:07.809715    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:28:07.846242    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.882404    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:28:07.918280    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.956186    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:28:07.994072    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:28:08.029701    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:28:08.067417    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:28:08.104772    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:28:08.126209    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:28:08.140685    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:28:08.181598    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:08.410362    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:28:08.449856    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:08.466974    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Unit]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:28:08.492900    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:28:08.492900    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:28:08.492900    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Service]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Type=notify
	I0429 20:28:08.492900    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:28:08.492900    6560 command_runner.go:130] > Environment=NO_PROXY=172.17.241.25
	I0429 20:28:08.492900    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:28:08.492900    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:28:08.492900    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:28:08.492900    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:28:08.492900    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:28:08.492900    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:28:08.492900    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:28:08.492900    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:28:08.492900    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:28:08.493891    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:28:08.493891    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:28:08.493891    6560 command_runner.go:130] > Delegate=yes
	I0429 20:28:08.493891    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:28:08.493891    6560 command_runner.go:130] > KillMode=process
	I0429 20:28:08.493891    6560 command_runner.go:130] > [Install]
	I0429 20:28:08.493891    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:28:08.505928    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.548562    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:28:08.606977    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.652185    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.695349    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:28:08.785230    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.816602    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:08.853434    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:28:08.870019    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:28:08.876256    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:28:08.890247    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:28:08.911471    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:28:08.962890    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:28:09.201152    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:28:09.397561    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:28:09.398166    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:28:09.451159    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:09.673084    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:29:10.809648    6560 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 20:29:10.809648    6560 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 20:29:10.809648    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1361028s)
	I0429 20:29:10.827248    6560 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 20:29:10.852173    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 20:29:10.852279    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 20:29:10.852319    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 20:29:10.852739    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854100    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854118    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854142    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854175    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 20:29:10.865335    6560 out.go:177] 
	W0429 20:29:10.865335    6560 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	
	-- /stdout --
	W0429 20:29:10.865335    6560 out.go:239] * 
	W0429 20:29:10.869400    6560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:29:10.876700    6560 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-515700 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv" : exit status 90
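The journal above pins the root cause of exit status 90: on restart, dockerd times out dialing /run/containerd/containerd.sock ("context deadline exceeded"). A minimal triage sketch follows; the commented SSH/systemctl steps are assumptions about interactive node access (they were not run by this test), and only the quoted log line is taken from this report:

```shell
# Failure line captured verbatim from the journalctl dump above:
LOG='Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded'

# On the node itself, one would start with the commands the stderr suggests
# (hypothetical session; requires SSH access to the failed VM):
#   systemctl status docker.service
#   journalctl -xeu docker.service
#   ls -l /run/containerd/containerd.sock   # was the socket ever created?

# Offline, classify the captured line so repeated CI failures can be
# bucketed by signature rather than re-read by hand:
kind=other-failure
case "$LOG" in
  *'failed to dial "/run/containerd/containerd.sock"'*'context deadline exceeded'*)
    kind=containerd-socket-timeout ;;
esac
echo "$kind"
```

The glob patterns in `case` keep the matcher dependency-free; a grep over the full journal would work equally well for bulk triage.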
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700: (12.4804668s)
helpers_test.go:244: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25: (8.8796264s)
helpers_test.go:252: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                   Args                    |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p image-342000                           | image-342000             | minikube6\jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:57 UTC |
	| start   | -p json-output-757400                     | json-output-757400       | testUser          | v1.33.0 | 29 Apr 24 19:57 UTC | 29 Apr 24 20:01 UTC |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	|         | --memory=2200 --wait=true                 |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| pause   | -p json-output-757400                     | json-output-757400       | testUser          | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:01 UTC |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| unpause | -p json-output-757400                     | json-output-757400       | testUser          | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:01 UTC |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| stop    | -p json-output-757400                     | json-output-757400       | testUser          | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:02 UTC |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| delete  | -p json-output-757400                     | json-output-757400       | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	| start   | -p json-output-error-203900               | json-output-error-203900 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --memory=2200 --output=json               |                          |                   |         |                     |                     |
	|         | --wait=true --driver=fail                 |                          |                   |         |                     |                     |
	| delete  | -p json-output-error-203900               | json-output-error-203900 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	| start   | -p first-247400                           | first-247400             | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:05 UTC |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| start   | -p second-247400                          | second-247400            | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:05 UTC | 29 Apr 24 20:09 UTC |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| delete  | -p second-247400                          | second-247400            | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:09 UTC | 29 Apr 24 20:10 UTC |
	| delete  | -p first-247400                           | first-247400             | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:10 UTC | 29 Apr 24 20:11 UTC |
	| start   | -p mount-start-1-089600                   | mount-start-1-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:11 UTC | 29 Apr 24 20:13 UTC |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46464                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host | mount-start-1-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:13 UTC |                     |
	|         | --profile mount-start-1-089600 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46464 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-1-089600 ssh -- ls            | mount-start-1-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:13 UTC | 29 Apr 24 20:13 UTC |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| start   | -p mount-start-2-089600                   | mount-start-2-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:13 UTC | 29 Apr 24 20:16 UTC |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host | mount-start-2-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:16 UTC |                     |
	|         | --profile mount-start-2-089600 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-089600 ssh -- ls            | mount-start-2-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:16 UTC | 29 Apr 24 20:16 UTC |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| delete  | -p mount-start-1-089600                   | mount-start-1-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:16 UTC | 29 Apr 24 20:17 UTC |
	|         | --alsologtostderr -v=5                    |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-089600 ssh -- ls            | mount-start-2-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:17 UTC | 29 Apr 24 20:17 UTC |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| stop    | -p mount-start-2-089600                   | mount-start-2-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:17 UTC | 29 Apr 24 20:17 UTC |
	| start   | -p mount-start-2-089600                   | mount-start-2-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:17 UTC |                     |
	| delete  | -p mount-start-2-089600                   | mount-start-2-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:20 UTC | 29 Apr 24 20:21 UTC |
	| delete  | -p mount-start-1-089600                   | mount-start-1-089600     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:22 UTC | 29 Apr 24 20:22 UTC |
	| start   | -p multinode-515700                       | multinode-515700         | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:22 UTC |                     |
	|         | --wait=true --memory=2200                 |                          |                   |         |                     |                     |
	|         | --nodes=2 -v=8                            |                          |                   |         |                     |                     |
	|         | --alsologtostderr                         |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:22:01
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:22:01.431751    6560 out.go:291] Setting OutFile to fd 1000 ...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.432590    6560 out.go:304] Setting ErrFile to fd 1156...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.463325    6560 out.go:298] Setting JSON to false
	I0429 20:22:01.467738    6560 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24060,"bootTime":1714398060,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 20:22:01.467738    6560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 20:22:01.473386    6560 out.go:177] * [multinode-515700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 20:22:01.477900    6560 notify.go:220] Checking for updates...
	I0429 20:22:01.480328    6560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:22:01.485602    6560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:22:01.488123    6560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 20:22:01.490657    6560 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:22:01.493249    6560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:22:01.496241    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:22:01.497610    6560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:22:06.930154    6560 out.go:177] * Using the hyperv driver based on user configuration
	I0429 20:22:06.933587    6560 start.go:297] selected driver: hyperv
	I0429 20:22:06.933587    6560 start.go:901] validating driver "hyperv" against <nil>
	I0429 20:22:06.933587    6560 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:22:06.986262    6560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 20:22:06.987723    6560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:22:06.988334    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:22:06.988334    6560 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 20:22:06.988334    6560 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 20:22:06.988334    6560 start.go:340] cluster config:
	{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:22:06.988334    6560 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:22:06.992867    6560 out.go:177] * Starting "multinode-515700" primary control-plane node in "multinode-515700" cluster
	I0429 20:22:06.995976    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:22:06.996499    6560 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 20:22:06.996703    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:22:06.996741    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:22:06.996741    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:22:06.996741    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:22:06.996741    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json: {Name:mkdf346f9e30a055d7c79ffb416c8ce539e0c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:360] acquireMachinesLock for multinode-515700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700"
	I0429 20:22:06.999081    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:22:06.999081    6560 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 20:22:07.006481    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:22:07.006790    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:22:07.006790    6560 client.go:168] LocalClient.Create starting
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:22:09.217702    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:22:09.217822    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:09.217951    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:22:11.056235    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:12.618512    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:16.461966    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:22:17.019827    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: Creating VM...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:20.140355    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:22:20.140483    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:22.004896    6560 main.go:141] libmachine: Creating VHD
	I0429 20:22:22.004896    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:22:25.795387    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DA11902-3EE7-4F99-A00A-752C0686FD99
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:22:25.796445    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:25.796496    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:22:25.796702    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:22:25.814462    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:22:29.034595    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:29.035273    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:29.035337    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -SizeBytes 20000MB
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:31.671427    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-515700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:35.461856    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700 -DynamicMemoryEnabled $false
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:37.723890    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700 -Count 2
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\boot2docker.iso'
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:42.558432    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd'
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:45.265400    6560 main.go:141] libmachine: Starting VM...
	I0429 20:22:45.265400    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:50.732199    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:50.733048    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:50.733149    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:54.308058    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:56.517062    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:59.110985    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:59.111613    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:00.127675    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:02.349860    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:05.987459    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:08.224322    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:10.790333    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:10.791338    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:11.803237    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:14.061252    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:18.855659    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:23:18.855911    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:21.063683    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:23.697335    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:23.697580    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:23.703285    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:23.713965    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:23.713965    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:23:23.854760    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:23:23.854760    6560 buildroot.go:166] provisioning hostname "multinode-515700"
	I0429 20:23:23.854760    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:26.029157    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:26.029995    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:26.030093    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:28.624899    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:28.625217    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:28.625495    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700 && echo "multinode-515700" | sudo tee /etc/hostname
	I0429 20:23:28.799265    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700
	
	I0429 20:23:28.799376    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:30.924333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:33.588985    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:33.589381    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:33.589381    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:23:33.743242    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:23:33.743242    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:23:33.743242    6560 buildroot.go:174] setting up certificates
	I0429 20:23:33.743242    6560 provision.go:84] configureAuth start
	I0429 20:23:33.743939    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:35.885562    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:38.477298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:40.581307    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:43.165623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:43.165853    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:43.165933    6560 provision.go:143] copyHostCerts
	I0429 20:23:43.166093    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:23:43.166093    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:23:43.166093    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:23:43.166722    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:23:43.168141    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:23:43.168305    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:23:43.168305    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:23:43.168887    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:23:43.169614    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:23:43.170245    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:23:43.170340    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:23:43.170731    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:23:43.171712    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700 san=[127.0.0.1 172.17.241.25 localhost minikube multinode-515700]
	I0429 20:23:43.368646    6560 provision.go:177] copyRemoteCerts
	I0429 20:23:43.382882    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:23:43.382882    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:45.539057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:48.109324    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:23:48.217340    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8343588s)
	I0429 20:23:48.217478    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:23:48.218375    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:23:48.267636    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:23:48.267636    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 20:23:48.316493    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:23:48.316784    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:23:48.372851    6560 provision.go:87] duration metric: took 14.6294509s to configureAuth
	I0429 20:23:48.372952    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:23:48.373086    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:23:48.373086    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:50.522765    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:50.522998    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:50.523146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:53.169650    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:53.170462    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:53.170462    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:23:53.302673    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:23:53.302726    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:23:53.302726    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:23:53.302726    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:55.434984    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:58.060160    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:58.061082    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:58.067077    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:58.068199    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:58.068292    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:23:58.226608    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:23:58.227212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:00.358933    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:02.944293    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:02.944373    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:02.950227    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:02.950958    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:02.950958    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:24:05.224184    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:24:05.224184    6560 machine.go:97] duration metric: took 46.3681587s to provisionDockerMachine
	I0429 20:24:05.224184    6560 client.go:171] duration metric: took 1m58.2164577s to LocalClient.Create
	I0429 20:24:05.224184    6560 start.go:167] duration metric: took 1m58.2164577s to libmachine.API.Create "multinode-515700"
	I0429 20:24:05.224184    6560 start.go:293] postStartSetup for "multinode-515700" (driver="hyperv")
	I0429 20:24:05.224184    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:24:05.241199    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:24:05.241199    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:07.393879    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:09.983789    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:09.984033    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:09.984469    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:10.092254    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8510176s)
	I0429 20:24:10.107982    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:24:10.116700    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:24:10.116700    6560 command_runner.go:130] > ID=buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:24:10.116700    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:24:10.116700    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:24:10.116700    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:24:10.117268    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:24:10.118515    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:24:10.118515    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:24:10.132514    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:24:10.152888    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:24:10.201665    6560 start.go:296] duration metric: took 4.9774423s for postStartSetup
	I0429 20:24:10.204966    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:12.345708    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:12.345785    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:12.345855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:14.957675    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:24:14.960758    6560 start.go:128] duration metric: took 2m7.9606641s to createHost
	I0429 20:24:14.962026    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:17.100197    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:17.100281    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:17.100354    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:19.725196    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:19.725860    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:19.725860    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:24:19.867560    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422259.868914581
	
	I0429 20:24:19.867560    6560 fix.go:216] guest clock: 1714422259.868914581
	I0429 20:24:19.867694    6560 fix.go:229] Guest: 2024-04-29 20:24:19.868914581 +0000 UTC Remote: 2024-04-29 20:24:14.9613787 +0000 UTC m=+133.724240401 (delta=4.907535881s)
	I0429 20:24:19.867694    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:22.005967    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:24.588016    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:24.588016    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:24.588016    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422259
	I0429 20:24:24.741766    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:24:19 UTC 2024
	
	I0429 20:24:24.741837    6560 fix.go:236] clock set: Mon Apr 29 20:24:19 UTC 2024
	 (err=<nil>)
	I0429 20:24:24.741837    6560 start.go:83] releasing machines lock for "multinode-515700", held for 2m17.7427319s
	I0429 20:24:24.742129    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:26.884301    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:29.475377    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:29.476046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:29.480912    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:24:29.481639    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:29.493304    6560 ssh_runner.go:195] Run: cat /version.json
	I0429 20:24:29.493304    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:31.702922    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:34.435635    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.436190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.436258    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.480228    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.481073    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.481135    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.531424    6560 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 20:24:34.531720    6560 ssh_runner.go:235] Completed: cat /version.json: (5.0383759s)
	I0429 20:24:34.545943    6560 ssh_runner.go:195] Run: systemctl --version
	I0429 20:24:34.614256    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:24:34.615354    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1343125s)
	I0429 20:24:34.615354    6560 command_runner.go:130] > systemd 252 (252)
	I0429 20:24:34.615354    6560 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 20:24:34.630005    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:24:34.639051    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 20:24:34.639955    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:24:34.653590    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:24:34.683800    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:24:34.683903    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:24:34.683903    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:34.684139    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:34.720958    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:24:34.735137    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:24:34.769077    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:24:34.791121    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:24:34.804751    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:24:34.838781    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.871052    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:24:34.905043    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.940043    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:24:34.975295    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:24:35.009502    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:24:35.044104    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:24:35.078095    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:24:35.099570    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:24:35.114246    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:24:35.146794    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:35.365920    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:24:35.402710    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:35.417050    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Unit]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:24:35.443946    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:24:35.443946    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:24:35.443946    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Service]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Type=notify
	I0429 20:24:35.443946    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:24:35.443946    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:24:35.443946    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:24:35.443946    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:24:35.443946    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:24:35.443946    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:24:35.443946    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:24:35.443946    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:24:35.443946    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:24:35.443946    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:24:35.443946    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:24:35.443946    6560 command_runner.go:130] > Delegate=yes
	I0429 20:24:35.443946    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:24:35.443946    6560 command_runner.go:130] > KillMode=process
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Install]
	I0429 20:24:35.444947    6560 command_runner.go:130] > WantedBy=multi-user.target
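The `docker.service` drop-in dumped above relies on the systemd convention that an empty `ExecStart=` clears the command inherited from the base unit before the next line sets the real one. A minimal sketch of that pattern (paths and the dockerd command line here are illustrative; a real drop-in lives under `/etc/systemd/system/docker.service.d/`):

```shell
# Hypothetical drop-in demonstrating the ExecStart-clearing pattern.
mkdir -p /tmp/docker.service.d
cat > /tmp/docker.service.d/10-override.conf <<'EOF'
[Service]
# An empty ExecStart= clears the command inherited from the base unit;
# the line after it then becomes the only ExecStart, which is required
# for anything other than Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart=' /tmp/docker.service.d/10-override.conf   # → 2
```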
	I0429 20:24:35.457957    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.500818    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:24:35.548559    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.585869    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.622879    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:24:35.694256    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.721660    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:35.757211    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
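The crictl step above amounts to pointing `crictl` at the cri-dockerd socket via `/etc/crictl.yaml`. The same command, run unprivileged against `/tmp` for illustration:

```shell
# Write a crictl config naming the cri-dockerd runtime endpoint
# (using /tmp instead of /etc so no sudo is needed in this sketch).
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | tee /tmp/crictl.yaml
```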
	I0429 20:24:35.773795    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:24:35.779277    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:24:35.793892    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:24:35.813834    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:24:35.865638    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:24:36.085117    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:24:36.291781    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:24:36.291781    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
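The log only records that a 130-byte `daemon.json` was pushed to select the "cgroupfs" driver; a plausible shape for that file (the exact contents are an assumption, not taken from the log) is:

```shell
# Hypothetical daemon.json selecting the cgroupfs cgroup driver for dockerd,
# matching the "configuring docker to use cgroupfs" step above.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
grep -q cgroupfs /tmp/daemon.json && echo configured
```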
	I0429 20:24:36.337637    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:36.567033    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:39.106704    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396504s)
	I0429 20:24:39.121937    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 20:24:39.164421    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.201973    6560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 20:24:39.432817    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 20:24:39.648494    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:39.872471    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 20:24:39.918782    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.959078    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:40.189711    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 20:24:40.314827    6560 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 20:24:40.327765    6560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 20:24:40.337989    6560 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 20:24:40.338077    6560 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 20:24:40.338077    6560 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Modify: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Change: 2024-04-29 20:24:40.227771386 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] >  Birth: -
	I0429 20:24:40.338228    6560 start.go:562] Will wait 60s for crictl version
	I0429 20:24:40.353543    6560 ssh_runner.go:195] Run: which crictl
	I0429 20:24:40.359551    6560 command_runner.go:130] > /usr/bin/crictl
	I0429 20:24:40.372542    6560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:24:40.422534    6560 command_runner.go:130] > Version:  0.1.0
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeName:  docker
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 20:24:40.422534    6560 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 20:24:40.433531    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.468470    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.477791    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.510922    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.518057    6560 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 20:24:40.518283    6560 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: 172.17.240.1/20
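The `ip.go` lines above walk the host's adapters and keep the first one whose name starts with the wanted prefix. The selection logic can be sketched like this (minikube does this in Go; the adapter names are the ones from the log):

```shell
# First-prefix-match over interface names, as in getIPForInterface above.
wanted='vEthernet (Default Switch)'
matched=''
for name in 'Ethernet 2' 'Loopback Pseudo-Interface 1' 'vEthernet (Default Switch)'; do
  case "$name" in
    "$wanted"*) matched="$name"; break ;;   # name starts with the prefix
  esac
done
echo "found prefix matching interface: $matched"
```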
	I0429 20:24:40.538782    6560 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 20:24:40.546082    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:24:40.569927    6560 kubeadm.go:877] updating cluster {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:24:40.570125    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:24:40.581034    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:40.605162    6560 docker.go:685] Got preloaded images: 
	I0429 20:24:40.605162    6560 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 20:24:40.617894    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:40.637456    6560 command_runner.go:139] > {"Repositories":{}}
	I0429 20:24:40.652557    6560 ssh_runner.go:195] Run: which lz4
	I0429 20:24:40.659728    6560 command_runner.go:130] > /usr/bin/lz4
	I0429 20:24:40.659728    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 20:24:40.676390    6560 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:24:40.682600    6560 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 20:24:43.151463    6560 docker.go:649] duration metric: took 2.4917153s to copy over tarball
	I0429 20:24:43.166991    6560 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:24:51.777678    6560 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6106197s)
	I0429 20:24:51.777678    6560 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:24:51.848689    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:51.869772    6560 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 20:24:51.869772    6560 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 20:24:51.923721    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:52.150884    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:55.504316    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3534062s)
	I0429 20:24:55.515091    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:24:55.540357    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 20:24:55.540357    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:24:55.540557    6560 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 20:24:55.540557    6560 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:24:55.540557    6560 kubeadm.go:928] updating node { 172.17.241.25 8443 v1.30.0 docker true true} ...
	I0429 20:24:55.540557    6560 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-515700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.241.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:24:55.550945    6560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 20:24:55.586940    6560 command_runner.go:130] > cgroupfs
	I0429 20:24:55.587354    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:24:55.587354    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:24:55.587354    6560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:24:55.587354    6560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.241.25 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-515700 NodeName:multinode-515700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.241.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.241.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:24:55.587882    6560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.241.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-515700"
	  kubeletExtraArgs:
	    node-ip: 172.17.241.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.241.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:24:55.601173    6560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubeadm
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubectl
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubelet
	I0429 20:24:55.622022    6560 binaries.go:44] Found k8s binaries, skipping transfer
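The "Found k8s binaries, skipping transfer" decision above is a presence check: transfer `kubeadm`/`kubectl`/`kubelet` only when the listing fails. A minimal sketch of that check (directory is illustrative; the real path is `/var/lib/minikube/binaries/<version>`):

```shell
# Skip-if-present check over the three kubernetes binaries.
bindir=/tmp/minikube-binaries
mkdir -p "$bindir"
touch "$bindir/kubeadm" "$bindir/kubectl" "$bindir/kubelet"
# ls exits 0 only when every listed binary exists, mirroring the check above.
if ls "$bindir"/kubeadm "$bindir"/kubectl "$bindir"/kubelet >/dev/null 2>&1; then
  msg='Found k8s binaries, skipping transfer'
fi
echo "$msg"
```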
	I0429 20:24:55.633924    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:24:55.654273    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 20:24:55.692289    6560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:24:55.726319    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:24:55.774801    6560 ssh_runner.go:195] Run: grep 172.17.241.25	control-plane.minikube.internal$ /etc/hosts
	I0429 20:24:55.781653    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.241.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:24:55.820570    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:56.051044    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:24:56.087660    6560 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700 for IP: 172.17.241.25
	I0429 20:24:56.087753    6560 certs.go:194] generating shared ca certs ...
	I0429 20:24:56.087824    6560 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 20:24:56.089063    6560 certs.go:256] generating profile certs ...
	I0429 20:24:56.089855    6560 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key
	I0429 20:24:56.089855    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt with IP's: []
	I0429 20:24:56.283640    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt ...
	I0429 20:24:56.284633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt: {Name:mk1286f657dae134d1e4806ec4fc1d780c02f0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.285633    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key ...
	I0429 20:24:56.285633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key: {Name:mka98d4501f3f942abed1092b1c97c4a2bbd30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.286633    6560 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d
	I0429 20:24:56.287300    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.241.25]
	I0429 20:24:56.456862    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d ...
	I0429 20:24:56.456862    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d: {Name:mk09d828589f59d94791e90fc999c9ce1101118e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d ...
	I0429 20:24:56.458476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d: {Name:mk92ebf0409a99e4a3e3430ff86080f164f4bc0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458796    6560 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt
	I0429 20:24:56.473961    6560 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key
	I0429 20:24:56.474965    6560 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key
	I0429 20:24:56.474965    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt with IP's: []
	I0429 20:24:56.680472    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt ...
	I0429 20:24:56.680472    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt: {Name:mkc600562c7738e3eec9de4025428e3048df463a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.682476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key ...
	I0429 20:24:56.682476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key: {Name:mkc9ba6e1afbc9ca05ce8802b568a72bfd19a90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
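The certs steps above generate key/cert pairs signed for specific names ("minikube-user", "minikube", "aggregator"). minikube does this in Go's crypto libraries; a rough `openssl` analogue for a self-signed client cert (file names and the CN are illustrative, and the real certs are CA-signed rather than self-signed):

```shell
# Generate a key and a self-signed cert for CN=minikube-user,
# then read the subject back to confirm.
openssl genrsa -out /tmp/client.key 2048 2>/dev/null
openssl req -new -x509 -key /tmp/client.key \
  -subj "/CN=minikube-user" -days 1 -out /tmp/client.crt 2>/dev/null
openssl x509 -in /tmp/client.crt -noout -subject
```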
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 20:24:56.685482    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 20:24:56.693323    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 20:24:56.701358    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 20:24:56.702409    6560 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 20:24:56.702718    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 20:24:56.702843    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 20:24:56.705315    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:24:56.758912    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 20:24:56.809584    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:24:56.860874    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:24:56.918708    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:24:56.969377    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:24:57.018903    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:24:57.070438    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:24:57.119823    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:24:57.168671    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 20:24:57.216697    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 20:24:57.263605    6560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:24:57.314590    6560 ssh_runner.go:195] Run: openssl version
	I0429 20:24:57.325614    6560 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 20:24:57.340061    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:24:57.374639    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.394971    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.404667    6560 command_runner.go:130] > b5213941
	I0429 20:24:57.419162    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:24:57.454540    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 20:24:57.494441    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.501867    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.502224    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.517134    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.527174    6560 command_runner.go:130] > 51391683
	I0429 20:24:57.544472    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 20:24:57.579789    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 20:24:57.613535    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622605    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622696    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.637764    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.649176    6560 command_runner.go:130] > 3ec20f2e
	I0429 20:24:57.665410    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
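The repeated `openssl x509 -hash` / `ln -fs` pairs above are the standard OpenSSL subject-hash convention: each CA certificate is symlinked into `/etc/ssl/certs` under its subject hash with a `.0` suffix so TLS libraries can locate it by hash. A minimal sketch of the same steps against a throwaway self-signed CA (the `demoCA` name and temp paths are illustrative, not from the log):

```shell
set -e
tmp=$(mktemp -d)
# generate a throwaway self-signed CA; "demoCA" is an illustrative subject
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
# compute the subject hash, as the log's `openssl x509 -hash -noout` step does
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
# link <hash>.0 -> cert, mirroring the log's `ln -fs ... /etc/ssl/certs/<hash>.0`
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
ls -l "$tmp/$hash.0"
```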
	I0429 20:24:57.708796    6560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:24:57.716466    6560 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717133    6560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717510    6560 kubeadm.go:391] StartCluster: {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:24:57.729105    6560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 20:24:57.771112    6560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 20:24:57.792952    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 20:24:57.807601    6560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:24:57.837965    6560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 20:24:57.856820    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:156] found existing configuration files:
	
	I0429 20:24:57.872870    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:24:57.892109    6560 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.892549    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.905782    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:24:57.939062    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:24:57.957607    6560 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.957753    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.972479    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:24:58.006849    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.025918    6560 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.025918    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.039054    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.072026    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:24:58.092314    6560 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.092673    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.105776    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
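The `grep` / `rm -f` pairs above implement minikube's stale-config check: each kubeconfig under `/etc/kubernetes` is kept only if it already references the expected control-plane endpoint, and is otherwise removed before `kubeadm init` runs. A hedged sketch of that check against a scratch file (the endpoint string is taken from the log; the temp path is illustrative):

```shell
endpoint="https://control-plane.minikube.internal:8443"
conf=$(mktemp)
echo "    server: ${endpoint}" > "$conf"
# keep the file only if it references the expected endpoint, as the log's
# grep-then-`rm -f` sequence does for admin.conf, kubelet.conf, etc.
if grep -q "$endpoint" "$conf"; then
  echo "keep: $conf"
else
  rm -f "$conf"
fi
```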
	I0429 20:24:58.124274    6560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:24:58.562957    6560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:24:58.562957    6560 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:25:12.186137    6560 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186137    6560 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186277    6560 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 20:25:12.186320    6560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:25:12.186516    6560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.187085    6560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.187131    6560 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.190071    6560 out.go:204]   - Generating certificates and keys ...
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190667    6560 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.191251    6560 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191251    6560 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 20:25:12.193040    6560 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193086    6560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193701    6560 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193701    6560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.198949    6560 out.go:204]   - Booting up control plane ...
	I0429 20:25:12.193843    6560 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199855    6560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:25:12.200494    6560 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200494    6560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201207    6560 command_runner.go:130] > [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.202201    6560 command_runner.go:130] > [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 out.go:204]   - Configuring RBAC rules ...
	I0429 20:25:12.202201    6560 command_runner.go:130] > [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.204361    6560 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.206433    6560 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206433    6560 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206983    6560 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 kubeadm.go:309] 
	I0429 20:25:12.207142    6560 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 kubeadm.go:309] 
	I0429 20:25:12.207365    6560 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207404    6560 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207464    6560 kubeadm.go:309] 
	I0429 20:25:12.207514    6560 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 20:25:12.207589    6560 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:25:12.207764    6560 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.207807    6560 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.208030    6560 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 kubeadm.go:309] 
	I0429 20:25:12.208230    6560 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208230    6560 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208281    6560 kubeadm.go:309] 
	I0429 20:25:12.208375    6560 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208375    6560 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208442    6560 kubeadm.go:309] 
	I0429 20:25:12.208643    6560 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 20:25:12.208733    6560 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:25:12.208874    6560 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.208936    6560 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.209014    6560 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209014    6560 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209735    6560 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209735    6560 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--control-plane 
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--control-plane 
	I0429 20:25:12.210277    6560 kubeadm.go:309] 
	I0429 20:25:12.210538    6560 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] 
	I0429 20:25:12.210726    6560 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210726    6560 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210937    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:25:12.211197    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:25:12.215717    6560 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 20:25:12.234164    6560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 20:25:12.242817    6560 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: 2024-04-29 20:23:14.801002600 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Change: 2024-04-29 20:23:06.257000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] >  Birth: -
	I0429 20:25:12.242817    6560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 20:25:12.242817    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 20:25:12.301387    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 20:25:13.060621    6560 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > serviceaccount/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > daemonset.apps/kindnet created
	I0429 20:25:13.060707    6560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-515700 minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=multinode-515700 minikube.k8s.io/primary=true
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.092072    6560 command_runner.go:130] > -16
	I0429 20:25:13.092113    6560 ops.go:34] apiserver oom_adj: -16
	I0429 20:25:13.290753    6560 command_runner.go:130] > node/multinode-515700 labeled
	I0429 20:25:13.292700    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 20:25:13.306335    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.426974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:13.819653    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.947766    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.320587    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.442246    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.822864    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.943107    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.309117    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.432718    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.814070    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.933861    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.317878    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.440680    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.819594    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.942387    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.322995    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.435199    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.809136    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.932465    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.308164    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.429047    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.808817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.928476    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.310090    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.432479    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.815590    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.929079    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.321723    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.442512    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.819466    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.933742    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.309314    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.424974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.811819    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.952603    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.316794    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.432125    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.808890    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.925838    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.310021    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.434432    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.819369    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.948876    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.307817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.457947    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.818980    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.932003    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:25.308659    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:25.488149    6560 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 20:25:25.488217    6560 command_runner.go:130] > default   0         1s
	I0429 20:25:25.489686    6560 kubeadm.go:1107] duration metric: took 12.4288824s to wait for elevateKubeSystemPrivileges
	W0429 20:25:25.489686    6560 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:25:25.489686    6560 kubeadm.go:393] duration metric: took 27.7719601s to StartCluster
	I0429 20:25:25.490694    6560 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.490694    6560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:25.491677    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.493697    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 20:25:25.493697    6560 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:25:25.498680    6560 out.go:177] * Verifying Kubernetes components...
	I0429 20:25:25.493697    6560 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:25:25.494664    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:25.504657    6560 addons.go:69] Setting storage-provisioner=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:69] Setting default-storageclass=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:234] Setting addon storage-provisioner=true in "multinode-515700"
	I0429 20:25:25.504657    6560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-515700"
	I0429 20:25:25.504657    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.520673    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:25:25.944109    6560 command_runner.go:130] > apiVersion: v1
	I0429 20:25:25.944267    6560 command_runner.go:130] > data:
	I0429 20:25:25.944267    6560 command_runner.go:130] >   Corefile: |
	I0429 20:25:25.944367    6560 command_runner.go:130] >     .:53 {
	I0429 20:25:25.944367    6560 command_runner.go:130] >         errors
	I0429 20:25:25.944367    6560 command_runner.go:130] >         health {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            lameduck 5s
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         ready
	I0429 20:25:25.944367    6560 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            pods insecure
	I0429 20:25:25.944367    6560 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 20:25:25.944367    6560 command_runner.go:130] >            ttl 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         prometheus :9153
	I0429 20:25:25.944367    6560 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            max_concurrent 1000
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         cache 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loop
	I0429 20:25:25.944367    6560 command_runner.go:130] >         reload
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loadbalance
	I0429 20:25:25.944367    6560 command_runner.go:130] >     }
	I0429 20:25:25.944367    6560 command_runner.go:130] > kind: ConfigMap
	I0429 20:25:25.944367    6560 command_runner.go:130] > metadata:
	I0429 20:25:25.944367    6560 command_runner.go:130] >   creationTimestamp: "2024-04-29T20:25:11Z"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   name: coredns
	I0429 20:25:25.944367    6560 command_runner.go:130] >   namespace: kube-system
	I0429 20:25:25.944367    6560 command_runner.go:130] >   resourceVersion: "265"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   uid: af2c186a-a14a-4671-8545-05c5ef5d4a89
	I0429 20:25:25.949389    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 20:25:26.023682    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:25:26.408680    6560 command_runner.go:130] > configmap/coredns replaced
	I0429 20:25:26.414254    6560 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.417677    6560 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 20:25:26.417677    6560 node_ready.go:35] waiting up to 6m0s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.435291    6560 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.437034    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.438430    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Audit-Id: a2ae57e5-53a3-4342-ad5c-c2149e87ef04
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438430    6560 round_trippers.go:580]     Audit-Id: 2e6b22a8-9874-417c-a6a5-f7b7437121f7
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438909    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.439086    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:26.440203    6560 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.440298    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.440406    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.440406    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.440519    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:26.440519    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.459913    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.459962    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Audit-Id: 9ca07d91-957f-4992-9642-97b01e07dde3
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.459962    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"393","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.918339    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.918339    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918339    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918339    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.918300    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.918498    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918580    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918580    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Audit-Id: 70383541-35df-461a-b4fb-41bd3b56f11d
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Audit-Id: e628428d-1384-4709-a32e-084c9dfec614
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.929164    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"404","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.929400    6560 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-515700" context rescaled to 1 replicas
	I0429 20:25:26.929400    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.426913    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.426913    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.426913    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.426913    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.430795    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:27.430795    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Audit-Id: e4e6b2b1-e008-4f2a-bae4-3596fce97666
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.430996    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.431340    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.789217    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.789348    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.792426    6560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:25:27.791141    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:27.795103    6560 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:27.795205    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:25:27.795205    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.795205    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:27.795924    6560 addons.go:234] Setting addon default-storageclass=true in "multinode-515700"
	I0429 20:25:27.795924    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:27.796802    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.922993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.923088    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.923175    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.923175    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.929435    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:27.929435    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.929545    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.929545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.929638    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Audit-Id: 8ef77f9f-d18f-4fa7-ab77-85c137602c84
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.930046    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.432611    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.432611    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.432611    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.432611    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.441320    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:28.441862    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Audit-Id: d32cd9f8-494c-4a69-b028-606c7d354657
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.442308    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.442914    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:28.927674    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.927674    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.927674    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.927897    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.933213    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:28.933794    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Audit-Id: 75d40b2c-c2ed-4221-9361-88591791a649
	I0429 20:25:28.934208    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.422724    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.422898    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.422898    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.422975    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.426431    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:29.426876    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Audit-Id: dde47b6c-069b-408d-a5c6-0a2ea7439643
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.427261    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.918308    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.918308    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.918308    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.918407    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.921072    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:29.921072    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.921072    6560 round_trippers.go:580]     Audit-Id: d4643df6-68ad-4c4c-9604-a5a4d019fba1
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.922076    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.057466    6560 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:30.057636    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:25:30.057750    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:30.145026    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:30.424041    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.424310    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.424310    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.424310    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.428606    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.429051    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.429263    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Audit-Id: 2c59a467-8079-41ed-ac1d-f96dd660d343
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.429435    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.931993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.931993    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.931993    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.931993    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.936635    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.936635    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.937644    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Audit-Id: 9214de5b-8221-4c68-b6b9-a92fe7d41fd1
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.938175    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.939066    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:31.423866    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.423866    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.423866    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.423988    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.427054    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.427827    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.427827    6560 round_trippers.go:580]     Audit-Id: 5f66acb8-ef38-4220-83b6-6e87fbec6f58
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.427869    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:31.932664    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.932664    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.932761    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.932761    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.936680    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.936680    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Audit-Id: f9fb721e-ccaf-4e33-ac69-8ed840761191
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.937009    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.312723    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:32.424680    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.424953    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.424953    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.424953    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.428624    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:32.428906    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.428906    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Audit-Id: d3a39f3a-571d-46c0-a442-edf136da8a11
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.429531    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.858444    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:32.926226    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.926317    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.926393    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.926393    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.929204    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:32.929583    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.929583    6560 round_trippers.go:580]     Audit-Id: 55fc987d-65c0-4ac8-95d2-7fa4185e179b
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.929734    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.930327    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.034553    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:33.425759    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.425833    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.425833    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.425833    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.428624    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.429656    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Audit-Id: d581fce7-8906-48d7-8e13-2d1aba9dec04
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.429916    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.430438    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:33.930984    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.931053    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.931053    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.931053    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.933717    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.933717    6560 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.933717    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.933717    6560 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Audit-Id: 680ed792-db71-4b29-abb9-40f7154e8b1e
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.933717    6560 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 20:25:33.933717    6560 command_runner.go:130] > pod/storage-provisioner created
	I0429 20:25:33.933717    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.428102    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.428102    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.428102    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.428102    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.431722    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:34.432624    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Audit-Id: 86cc0608-3000-42b0-9ce8-4223e32d60c3
	I0429 20:25:34.432684    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.433082    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.932029    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.932316    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.932316    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.932316    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.936749    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:34.936749    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Audit-Id: 0e63a4db-3dd4-4e74-8b79-c019b6b97e89
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.937415    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:35.024893    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:35.025151    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:35.025317    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:35.170634    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:35.371184    6560 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0429 20:25:35.371418    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 20:25:35.371571    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.371571    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.371571    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.380781    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:35.381213    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Audit-Id: 31f5e265-3d38-4520-88d0-33f47325189c
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Length: 1273
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.381380    6560 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 20:25:35.382106    6560 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.382183    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 20:25:35.382183    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:35.382269    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.390758    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.390758    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.390758    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Audit-Id: 4dbb716e-2d97-4c38-b342-f63e7d38a4d0
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Length: 1220
	I0429 20:25:35.391190    6560 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.395279    6560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 20:25:35.397530    6560 addons.go:505] duration metric: took 9.9037568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 20:25:35.421733    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.421733    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.421733    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.421733    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.452743    6560 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 20:25:35.452743    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.452743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Audit-Id: 316d0393-7ba5-4629-87cb-7ae54d0ea965
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.454477    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.455068    6560 node_ready.go:49] node "multinode-515700" has status "Ready":"True"
	I0429 20:25:35.455148    6560 node_ready.go:38] duration metric: took 9.0374019s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:35.455148    6560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:35.455213    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:35.455213    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.455213    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.455213    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.473128    6560 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 20:25:35.473128    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Audit-Id: 81e159c0-b703-47ba-a9f3-82cc907b8705
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.475820    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"431","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0429 20:25:35.481714    6560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:35.482325    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.482379    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.482379    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.482432    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.491093    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.491093    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Audit-Id: a2eb7ca2-d415-4a7c-a1f0-1ac743bd8f82
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.492090    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.493335    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.493335    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.493335    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.493419    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.496084    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:35.496084    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.496084    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Audit-Id: f61c97ad-ee7a-4666-9244-d7d2091b5d09
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.497097    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.497131    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.497332    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.991323    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.991323    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.991323    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.991323    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.995451    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:35.995451    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Audit-Id: faa8a1a4-279f-4dc3-99c8-8c3b9e9ed746
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:35.996592    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.997239    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.997292    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.997292    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.997292    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.999987    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:35.999987    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Audit-Id: 070c7fff-f707-4b9a-9aef-031cedc68a8c
	I0429 20:25:36.000411    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.483004    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.483004    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.483004    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.483004    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.488152    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:36.488152    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.488152    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.488678    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Audit-Id: fb5cc675-b39d-4cb0-ba8c-24140b3d95e8
	I0429 20:25:36.489818    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.490926    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.490926    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.490985    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.490985    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.494654    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:36.494654    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Audit-Id: fe6d880a-4cf8-4b10-8c7f-debde123eafc
	I0429 20:25:36.495423    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.991643    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.991643    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.991643    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.991855    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.996384    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:36.996384    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Audit-Id: 933a6dd5-a0f7-4380-8189-3e378a8a620d
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:36.997332    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.997760    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.997760    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.997760    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.997760    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.000889    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.000889    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.001211    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Audit-Id: 0342e743-45a6-4fd7-97be-55a766946396
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.001274    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.001759    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.495712    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:37.495712    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.495712    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.495712    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.508671    6560 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 20:25:37.508671    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Audit-Id: d30c6154-a41b-4a0d-976f-d19f40e67223
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.508671    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0429 20:25:37.510663    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.510663    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.510663    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.510663    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.513686    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.513686    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Audit-Id: 397b83a5-95f9-4df8-a76b-042ecc96922c
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.514662    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.514662    6560 pod_ready.go:92] pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.514662    6560 pod_ready.go:81] duration metric: took 2.0329329s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-515700
	I0429 20:25:37.514662    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.514662    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.514662    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.517691    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.517691    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Audit-Id: df53f071-06ed-4797-a51b-7d01b84cac86
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.518412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-515700","namespace":"kube-system","uid":"85f2dc9a-17b5-413c-ab83-d3cbe955571e","resourceVersion":"319","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.241.25:2379","kubernetes.io/config.hash":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.mirror":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.seen":"2024-04-29T20:25:11.718525866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0429 20:25:37.519044    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.519044    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.519124    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.519124    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.521788    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.521788    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Audit-Id: ee5fdb3e-9869-4cd7-996a-a25b453aeb6b
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.521944    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.522769    6560 pod_ready.go:92] pod "etcd-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.522844    6560 pod_ready.go:81] duration metric: took 8.1819ms for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.522844    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.523015    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-515700
	I0429 20:25:37.523015    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.523079    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.523079    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.525575    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.525575    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Audit-Id: cd9d851c-f606-48c9-8da3-3d194ab5464f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.526015    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-515700","namespace":"kube-system","uid":"f5a212eb-87a9-476a-981a-9f31731f39e6","resourceVersion":"312","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.241.25:8443","kubernetes.io/config.hash":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.mirror":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.seen":"2024-04-29T20:25:11.718530866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0429 20:25:37.526356    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.526356    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.526356    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.526356    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.535954    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:37.535954    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Audit-Id: 018aa21f-d408-4777-b54c-eb7aa2295a34
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.536470    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.536974    6560 pod_ready.go:92] pod "kube-apiserver-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.537034    6560 pod_ready.go:81] duration metric: took 14.0881ms for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537034    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537183    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-515700
	I0429 20:25:37.537276    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.537297    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.537297    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.539964    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.539964    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Audit-Id: d3232756-fc07-4b33-a3b5-989d2932cec4
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.541274    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-515700","namespace":"kube-system","uid":"2c9ba563-c2af-45b7-bc1e-bf39759a339b","resourceVersion":"315","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.mirror":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.seen":"2024-04-29T20:25:11.718532066Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0429 20:25:37.541935    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.541935    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.541935    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.541935    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.555960    6560 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 20:25:37.555960    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Audit-Id: 2d117219-3b1a-47fe-99a4-7e5aea7e84d3
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.555960    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.555960    6560 pod_ready.go:92] pod "kube-controller-manager-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.555960    6560 pod_ready.go:81] duration metric: took 18.9251ms for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.555960    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.556943    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gx5x
	I0429 20:25:37.556943    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.556943    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.556943    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.559965    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.560477    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.560477    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Audit-Id: 14e6d1be-eac6-4f20-9621-b409c951fae1
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.560781    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gx5x","generateName":"kube-proxy-","namespace":"kube-system","uid":"886ac698-7e9b-431b-b822-577331b02f41","resourceVersion":"407","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"027f1d05-009f-4199-81e9-45b0a2d3710f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"027f1d05-009f-4199-81e9-45b0a2d3710f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0429 20:25:37.561552    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.561581    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.561581    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.561581    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.567713    6560 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 20:25:37.567713    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Audit-Id: 678df177-6944-4d30-b889-62528c06bab2
	I0429 20:25:37.567713    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.568391    6560 pod_ready.go:92] pod "kube-proxy-6gx5x" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.568391    6560 pod_ready.go:81] duration metric: took 12.4313ms for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.568391    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.701559    6560 request.go:629] Waited for 132.9214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701779    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701853    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.701853    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.701853    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.705314    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.706129    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.706129    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.706183    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Audit-Id: 4fb010ad-4d68-4aa0-9ba4-f68d04faa9e8
	I0429 20:25:37.706412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-515700","namespace":"kube-system","uid":"096d3e94-25ba-49b3-b329-6fb47fc88f25","resourceVersion":"334","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.mirror":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.seen":"2024-04-29T20:25:11.718533166Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0429 20:25:37.905204    6560 request.go:629] Waited for 197.8802ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.905322    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.905466    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.909057    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.909159    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Audit-Id: a6cecf7e-83ad-4d5f-8cbb-a65ced7e83ce
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.909286    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.909697    6560 pod_ready.go:92] pod "kube-scheduler-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.909697    6560 pod_ready.go:81] duration metric: took 341.3037ms for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.909697    6560 pod_ready.go:38] duration metric: took 2.4545299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:37.909697    6560 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:25:37.923721    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:25:37.956142    6560 command_runner.go:130] > 2047
	I0429 20:25:37.956226    6560 api_server.go:72] duration metric: took 12.462433s to wait for apiserver process to appear ...
	I0429 20:25:37.956226    6560 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:25:37.956330    6560 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:25:37.965150    6560 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:25:37.965332    6560 round_trippers.go:463] GET https://172.17.241.25:8443/version
	I0429 20:25:37.965364    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.965364    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.965364    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.967124    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:37.967124    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Audit-Id: c3b17e5f-8eb5-4422-bcd1-48cea5831311
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.967423    6560 round_trippers.go:580]     Content-Length: 263
	I0429 20:25:37.967423    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 20:25:37.967530    6560 api_server.go:141] control plane version: v1.30.0
	I0429 20:25:37.967530    6560 api_server.go:131] duration metric: took 11.2306ms to wait for apiserver health ...
	I0429 20:25:37.967629    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:25:38.109818    6560 request.go:629] Waited for 142.1878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110201    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110256    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.110275    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.110275    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.118070    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.118070    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Audit-Id: 557b3073-d14e-4919-8133-995d5b042d22
	I0429 20:25:38.119823    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.123197    6560 system_pods.go:59] 8 kube-system pods found
	I0429 20:25:38.123197    6560 system_pods.go:61] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.123197    6560 system_pods.go:74] duration metric: took 155.566ms to wait for pod list to return data ...
	I0429 20:25:38.123197    6560 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:25:38.295950    6560 request.go:629] Waited for 172.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.296300    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.296300    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.300424    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:38.300424    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Content-Length: 261
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Audit-Id: 7466bf5b-fa07-4a6b-bc06-274738fc9259
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.300674    6560 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"13c4332f-9236-4f04-9e46-f5a98bc3d731","resourceVersion":"343","creationTimestamp":"2024-04-29T20:25:24Z"}}]}
	I0429 20:25:38.300674    6560 default_sa.go:45] found service account: "default"
	I0429 20:25:38.300674    6560 default_sa.go:55] duration metric: took 177.4758ms for default service account to be created ...
	I0429 20:25:38.300674    6560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:25:38.498686    6560 request.go:629] Waited for 197.291ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.498782    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.499005    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.499005    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.499005    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.506756    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.507387    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Audit-Id: ffc5efdb-4263-4450-8ff2-c1bb3f979300
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.507485    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.507503    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.508809    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.512231    6560 system_pods.go:86] 8 kube-system pods found
	I0429 20:25:38.512305    6560 system_pods.go:89] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.512305    6560 system_pods.go:89] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.512451    6560 system_pods.go:89] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.512451    6560 system_pods.go:126] duration metric: took 211.7756ms to wait for k8s-apps to be running ...
	I0429 20:25:38.512451    6560 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:25:38.526027    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:25:38.555837    6560 system_svc.go:56] duration metric: took 43.3852ms WaitForService to wait for kubelet
	I0429 20:25:38.555837    6560 kubeadm.go:576] duration metric: took 13.0620394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:25:38.556007    6560 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:25:38.701455    6560 request.go:629] Waited for 145.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701896    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701917    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.701917    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.702032    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.709221    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.709221    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Audit-Id: 9241b2a0-c483-4bfb-8a19-8f5c9b610b53
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.709221    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0429 20:25:38.710061    6560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:25:38.710061    6560 node_conditions.go:123] node cpu capacity is 2
	I0429 20:25:38.710061    6560 node_conditions.go:105] duration metric: took 154.0529ms to run NodePressure ...
	I0429 20:25:38.710061    6560 start.go:240] waiting for startup goroutines ...
	I0429 20:25:38.710061    6560 start.go:245] waiting for cluster config update ...
	I0429 20:25:38.710061    6560 start.go:254] writing updated cluster config ...
	I0429 20:25:38.717493    6560 out.go:177] 
	I0429 20:25:38.721129    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.735840    6560 out.go:177] * Starting "multinode-515700-m02" worker node in "multinode-515700" cluster
	I0429 20:25:38.738518    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:25:38.738518    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:25:38.738983    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:25:38.739240    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:25:38.739481    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.745029    6560 start.go:360] acquireMachinesLock for multinode-515700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:25:38.745029    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m02"
	I0429 20:25:38.745029    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 20:25:38.745575    6560 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 20:25:38.748852    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:25:38.748852    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:25:38.748852    6560 client.go:168] LocalClient.Create starting
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:40.746212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:25:42.605453    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:47.992432    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:47.992702    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:47.996014    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:25:48.551162    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: Creating VM...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:51.874174    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:25:51.874221    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:53.736899    6560 main.go:141] libmachine: Creating VHD
	I0429 20:25:53.737514    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D65FFD0C-285E-44D0-8723-21544BDDE71A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:25:57.529054    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:00.734035    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -SizeBytes 20000MB
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:03.314283    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-515700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700-m02 -DynamicMemoryEnabled $false
	I0429 20:26:09.480100    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700-m02 -Count 2
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:11.716979    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\boot2docker.iso'
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:14.377298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd'
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:17.090909    6560 main.go:141] libmachine: Starting VM...
	I0429 20:26:17.090909    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m02
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:22.527096    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:26.113296    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:28.339433    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:30.953587    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:30.953628    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:31.955478    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:34.197688    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:34.197831    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:34.197901    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:37.817016    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:42.682666    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:42.683603    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:43.685897    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:48.604877    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:48.604915    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:48.604999    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:50.794876    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:50.795093    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:50.795407    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:26:50.795407    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:52.992195    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:52.992243    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:52.992331    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:55.630552    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:55.641728    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:26:55.642758    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:26:55.769182    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:26:55.769182    6560 buildroot.go:166] provisioning hostname "multinode-515700-m02"
	I0429 20:26:55.769333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:57.942857    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:57.943721    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:57.943789    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:00.610012    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:00.610498    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:00.617342    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:00.618022    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:00.618022    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700-m02 && echo "multinode-515700-m02" | sudo tee /etc/hostname
	I0429 20:27:00.774430    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m02
	
	I0429 20:27:00.775391    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:02.970796    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:02.971352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:02.971577    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:05.640782    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:05.640782    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:05.640782    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:27:05.779330    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:27:05.779330    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:27:05.779435    6560 buildroot.go:174] setting up certificates
	I0429 20:27:05.779435    6560 provision.go:84] configureAuth start
	I0429 20:27:05.779531    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:07.939785    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:10.607752    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:10.608236    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:10.608319    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:15.428095    6560 provision.go:143] copyHostCerts
	I0429 20:27:15.429066    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:27:15.429066    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:27:15.429066    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:27:15.429626    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:27:15.430936    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:27:15.431366    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:27:15.431366    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:27:15.431875    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:27:15.432822    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:27:15.433064    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:27:15.433064    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:27:15.433498    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:27:15.434807    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700-m02 san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]
	I0429 20:27:15.511954    6560 provision.go:177] copyRemoteCerts
	I0429 20:27:15.527105    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:27:15.527105    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:20.368198    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:20.368587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:20.368930    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:20.467819    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9406764s)
	I0429 20:27:20.468832    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:27:20.469887    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:27:20.524889    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:27:20.525559    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 20:27:20.578020    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:27:20.578217    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:27:20.634803    6560 provision.go:87] duration metric: took 14.8552541s to configureAuth
	I0429 20:27:20.634874    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:27:20.635533    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:27:20.635638    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:22.779762    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:25.427345    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:25.427345    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:25.428345    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:27:25.562050    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:27:25.562195    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:27:25.562515    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:27:25.562592    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:30.404141    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:30.405195    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:30.412105    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:30.413171    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:30.413700    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.241.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:27:30.578477    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.241.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:27:30.578477    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:32.772580    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:35.465933    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:35.466426    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:35.466509    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:27:37.701893    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:27:37.701981    6560 machine.go:97] duration metric: took 46.9062133s to provisionDockerMachine
	I0429 20:27:37.702052    6560 client.go:171] duration metric: took 1m58.9522849s to LocalClient.Create
	I0429 20:27:37.702194    6560 start.go:167] duration metric: took 1m58.9524269s to libmachine.API.Create "multinode-515700"
	I0429 20:27:37.702194    6560 start.go:293] postStartSetup for "multinode-515700-m02" (driver="hyperv")
	I0429 20:27:37.702194    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:27:37.716028    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:27:37.716028    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:39.888498    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:39.889355    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:39.889707    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:42.576527    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:42.688245    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9721792s)
	I0429 20:27:42.703472    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:27:42.710185    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:27:42.710391    6560 command_runner.go:130] > ID=buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:27:42.710391    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:27:42.710562    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:27:42.710562    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:27:42.710640    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:27:42.712121    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:27:42.712121    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:27:42.725734    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:27:42.745571    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:27:42.798223    6560 start.go:296] duration metric: took 5.0959902s for postStartSetup
	I0429 20:27:42.801718    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:44.985225    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:47.630520    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:27:47.633045    6560 start.go:128] duration metric: took 2m8.8864784s to createHost
	I0429 20:27:47.633167    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:49.823309    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:49.823412    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:49.823495    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:52.524084    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:52.524183    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:52.530451    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:52.531204    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:52.531204    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:27:52.658970    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422472.660345683
	
	I0429 20:27:52.659208    6560 fix.go:216] guest clock: 1714422472.660345683
	I0429 20:27:52.659208    6560 fix.go:229] Guest: 2024-04-29 20:27:52.660345683 +0000 UTC Remote: 2024-04-29 20:27:47.6330452 +0000 UTC m=+346.394263801 (delta=5.027300483s)
	I0429 20:27:52.659208    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:57.461861    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:57.461927    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:57.467747    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:57.468699    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:57.468699    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422472
	I0429 20:27:57.617018    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:27:52 UTC 2024
	
	I0429 20:27:57.617018    6560 fix.go:236] clock set: Mon Apr 29 20:27:52 UTC 2024
	 (err=<nil>)
	I0429 20:27:57.617018    6560 start.go:83] releasing machines lock for "multinode-515700-m02", held for 2m18.8709228s
	I0429 20:27:57.618122    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:59.795247    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:02.475615    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:02.475867    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:02.479078    6560 out.go:177] * Found network options:
	I0429 20:28:02.481434    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.483990    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.486147    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.488513    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 20:28:02.490094    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.492090    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:28:02.492090    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:02.504078    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:28:02.504078    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:07.440744    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.440938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.441026    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.467629    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.629032    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:28:07.630105    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1379759s)
	I0429 20:28:07.630105    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 20:28:07.630229    6560 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1259881s)
	W0429 20:28:07.630229    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:28:07.649597    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:28:07.685721    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:28:07.685954    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:28:07.685954    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:07.686227    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:07.722613    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:28:07.736060    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:28:07.771561    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:28:07.793500    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:28:07.809715    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:28:07.846242    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.882404    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:28:07.918280    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.956186    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:28:07.994072    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:28:08.029701    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:28:08.067417    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:28:08.104772    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:28:08.126209    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:28:08.140685    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:28:08.181598    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:08.410362    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:28:08.449856    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:08.466974    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Unit]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:28:08.492900    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:28:08.492900    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:28:08.492900    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Service]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Type=notify
	I0429 20:28:08.492900    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:28:08.492900    6560 command_runner.go:130] > Environment=NO_PROXY=172.17.241.25
	I0429 20:28:08.492900    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:28:08.492900    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:28:08.492900    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:28:08.492900    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:28:08.492900    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:28:08.492900    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:28:08.492900    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:28:08.492900    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:28:08.492900    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:28:08.493891    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:28:08.493891    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:28:08.493891    6560 command_runner.go:130] > Delegate=yes
	I0429 20:28:08.493891    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:28:08.493891    6560 command_runner.go:130] > KillMode=process
	I0429 20:28:08.493891    6560 command_runner.go:130] > [Install]
	I0429 20:28:08.493891    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:28:08.505928    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.548562    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:28:08.606977    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.652185    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.695349    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:28:08.785230    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.816602    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:08.853434    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:28:08.870019    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:28:08.876256    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:28:08.890247    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:28:08.911471    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:28:08.962890    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:28:09.201152    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:28:09.397561    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:28:09.398166    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:28:09.451159    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:09.673084    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:29:10.809648    6560 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 20:29:10.809648    6560 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 20:29:10.809648    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1361028s)
	I0429 20:29:10.827248    6560 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 20:29:10.852173    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 20:29:10.852279    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 20:29:10.852319    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 20:29:10.852739    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854100    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854118    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854142    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854175    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 20:29:10.865335    6560 out.go:177] 
	W0429 20:29:10.865335    6560 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 20:29:10.865335    6560 out.go:239] * 
	W0429 20:29:10.869400    6560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:29:10.876700    6560 out.go:177] 
	
	
	==> Docker <==
	Apr 29 20:25:26 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:26.769472818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:27 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:25:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c226cf922db168728b224fb6b9c355495ce826898492b65348fc22b04cdf160/resolv.conf as [nameserver 172.17.240.1]"
	Apr 29 20:25:33 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:25:33Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240202-8f1494ea: Status: Downloaded newer image for kindest/kindnetd:v20240202-8f1494ea"
	Apr 29 20:25:33 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:33.994038833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:25:33 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:33.995039344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:25:33 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:33.995663651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:33 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:33.995954954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.035664867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.036195573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.036336075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.036886281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.069192138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.069496641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.069586342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.069891545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:25:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0274116a036cf3916525072a225736097fbb3d185b1f8865e3cdb283c6df4d56/resolv.conf as [nameserver 172.17.240.1]"
	Apr 29 20:25:36 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:25:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/73ab97e30d3d06ca5838c9688deaa9189046eac2488aebede8c192ac6661071e/resolv.conf as [nameserver 172.17.240.1]"
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.461690715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.461843314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.461859614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.462658312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.631948639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.632114139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.632143439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.632332338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	15da1b832ef20       cbb01a7bd410d                                                                              3 minutes ago       Running             coredns                   0                   73ab97e30d3d0       coredns-7db6d8ff4d-drcsj
	b26e455e6f823       6e38f40d628db                                                                              3 minutes ago       Running             storage-provisioner       0                   0274116a036cf       storage-provisioner
	11141cf0a01e5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988   3 minutes ago       Running             kindnet-cni               0                   5c226cf922db1       kindnet-lt84t
	8d116812e2fa7       a0bf559e280cf                                                                              4 minutes ago       Running             kube-proxy                0                   c4e88976a7bb5       kube-proxy-6gx5x
	9b9ad8fbed853       c42f13656d0b2                                                                              4 minutes ago       Running             kube-apiserver            0                   e1040c321d522       kube-apiserver-multinode-515700
	7748681b165fb       259c8277fcbbc                                                                              4 minutes ago       Running             kube-scheduler            0                   ab47450efbe05       kube-scheduler-multinode-515700
	01f30fac305bc       3861cfcd7c04c                                                                              4 minutes ago       Running             etcd                      0                   b5202cca492c4       etcd-multinode-515700
	c5de44f1f1066       c7aad43836fa5                                                                              4 minutes ago       Running             kube-controller-manager   0                   4ae9818227910       kube-controller-manager-multinode-515700
	
	
	==> coredns [15da1b832ef2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 658b75f59357881579d818bea4574a099ffd8bf4e34cb2d6414c381890635887b0895574e607ab48d69c0bc2657640404a00a48de79c5b96ce27f6a68e70a912
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36587 - 14172 "HINFO IN 4725538422205950284.7962128480288568612. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062354244s
	
	
	==> describe nodes <==
	Name:               multinode-515700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:29:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:25:42 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:25:42 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:25:42 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:25:42 +0000   Mon, 29 Apr 2024 20:25:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.241.25
	  Hostname:    multinode-515700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc8de88647d944658545c7ae4a702aea
	  System UUID:                68adc21b-67d2-5446-9537-0dea9fd880a0
	  Boot ID:                    9507eca5-5f1f-4862-974e-a61fb27048d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-drcsj                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-multinode-515700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kindnet-lt84t                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-apiserver-multinode-515700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-multinode-515700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-6gx5x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-multinode-515700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  Starting                 4m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m29s (x8 over 4m29s)  kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s (x8 over 4m29s)  kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s (x7 over 4m29s)  kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m8s                   node-controller  Node multinode-515700 event: Registered Node multinode-515700 in Controller
	  Normal  NodeReady                3m57s                  kubelet          Node multinode-515700 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.473706] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 20:24] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.212417] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +31.830340] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.112166] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.613568] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.259380] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.863180] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.213718] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.233297] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.301716] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.953055] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.129851] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.793087] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[Apr29 20:25] systemd-fstab-generator[1710]: Ignoring "noauto" option for root device
	[  +0.110579] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.112113] systemd-fstab-generator[2108]: Ignoring "noauto" option for root device
	[  +0.165104] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.220827] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.255309] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.248279] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 20:26] hrtimer: interrupt took 3466547 ns
	
	
	==> etcd [01f30fac305b] <==
	{"level":"info","ts":"2024-04-29T20:25:05.30806Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.241.25:2380"}
	{"level":"info","ts":"2024-04-29T20:25:05.309721Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"46980dd3bf48ce1f","initial-advertise-peer-urls":["https://172.17.241.25:2380"],"listen-peer-urls":["https://172.17.241.25:2380"],"advertise-client-urls":["https://172.17.241.25:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.241.25:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T20:25:05.31044Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T20:25:05.592632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T20:25:05.593172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T20:25:05.594369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f received MsgPreVoteResp from 46980dd3bf48ce1f at term 1"}
	{"level":"info","ts":"2024-04-29T20:25:05.594687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.594905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f received MsgVoteResp from 46980dd3bf48ce1f at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.595201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f became leader at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.595536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46980dd3bf48ce1f elected leader 46980dd3bf48ce1f at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.604545Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.611204Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"46980dd3bf48ce1f","local-member-attributes":"{Name:multinode-515700 ClientURLs:[https://172.17.241.25:2379]}","request-path":"/0/members/46980dd3bf48ce1f/attributes","cluster-id":"abc09309ccc0cb76","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T20:25:05.611653Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:25:05.620024Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.241.25:2379"}
	{"level":"info","ts":"2024-04-29T20:25:05.630573Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:25:05.63137Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T20:25:05.649307Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"abc09309ccc0cb76","local-member-id":"46980dd3bf48ce1f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.651933Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.653346Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.649239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T20:25:05.64915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T20:25:33.808305Z","caller":"traceutil/trace.go:171","msg":"trace[1613443414] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"266.125878ms","start":"2024-04-29T20:25:33.542119Z","end":"2024-04-29T20:25:33.808245Z","steps":["trace[1613443414] 'process raft request'  (duration: 265.820275ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:25:55.320778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.998939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4514"}
	{"level":"info","ts":"2024-04-29T20:25:55.320958Z","caller":"traceutil/trace.go:171","msg":"trace[1653665751] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:466; }","duration":"111.193233ms","start":"2024-04-29T20:25:55.209749Z","end":"2024-04-29T20:25:55.320942Z","steps":["trace[1653665751] 'range keys from in-memory index tree'  (duration: 110.919042ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:26:47.825608Z","caller":"traceutil/trace.go:171","msg":"trace[1666429790] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"149.644884ms","start":"2024-04-29T20:26:47.675822Z","end":"2024-04-29T20:26:47.825467Z","steps":["trace[1666429790] 'process raft request'  (duration: 149.476087ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:29:32 up 6 min,  0 users,  load average: 0.25, 0.26, 0.12
	Linux multinode-515700 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11141cf0a01e] <==
	I0429 20:27:25.417982       1 main.go:227] handling current node
	I0429 20:27:35.424473       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:27:35.424703       1 main.go:227] handling current node
	I0429 20:27:45.438553       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:27:45.438665       1 main.go:227] handling current node
	I0429 20:27:55.446700       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:27:55.446826       1 main.go:227] handling current node
	I0429 20:28:05.452823       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:28:05.452879       1 main.go:227] handling current node
	I0429 20:28:15.461173       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:28:15.461434       1 main.go:227] handling current node
	I0429 20:28:25.473013       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:28:25.473059       1 main.go:227] handling current node
	I0429 20:28:35.480337       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:28:35.480479       1 main.go:227] handling current node
	I0429 20:28:45.492073       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:28:45.492230       1 main.go:227] handling current node
	I0429 20:28:55.504149       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:28:55.504194       1 main.go:227] handling current node
	I0429 20:29:05.510146       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:29:05.510288       1 main.go:227] handling current node
	I0429 20:29:15.524781       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:29:15.524813       1 main.go:227] handling current node
	I0429 20:29:25.530119       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:29:25.530352       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9b9ad8fbed85] <==
	I0429 20:25:08.242631       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 20:25:08.242637       1 cache.go:39] Caches are synced for autoregister controller
	E0429 20:25:08.248119       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0429 20:25:08.268566       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 20:25:08.278746       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 20:25:08.278862       1 policy_source.go:224] refreshing policies
	E0429 20:25:08.294082       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 20:25:08.344166       1 controller.go:615] quota admission added evaluator for: namespaces
	E0429 20:25:08.380713       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 20:25:08.456691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 20:25:09.052862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 20:25:09.062497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 20:25:09.063038       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 20:25:10.434046       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 20:25:10.531926       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 20:25:10.667114       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 20:25:10.682682       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.241.25]
	I0429 20:25:10.685084       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 20:25:10.705095       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 20:25:11.202529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 20:25:11.660474       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 20:25:11.702512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 20:25:11.739640       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 20:25:25.195544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 20:25:25.294821       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c5de44f1f106] <==
	I0429 20:25:24.492391       1 shared_informer.go:320] Caches are synced for taint
	I0429 20:25:24.492625       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 20:25:24.492992       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-515700"
	I0429 20:25:24.493195       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0429 20:25:24.492650       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 20:25:24.549051       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 20:25:24.561849       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 20:25:24.566483       1 shared_informer.go:320] Caches are synced for disruption
	I0429 20:25:24.590460       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 20:25:24.618362       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 20:25:24.656708       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 20:25:25.127753       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 20:25:25.137681       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 20:25:25.137746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 20:25:25.742477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="536.801912ms"
	I0429 20:25:25.820241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.613668ms"
	I0429 20:25:25.820606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.801µs"
	I0429 20:25:26.647122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.452819ms"
	I0429 20:25:26.673190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.454556ms"
	I0429 20:25:26.673366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.301µs"
	I0429 20:25:35.442523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48µs"
	I0429 20:25:35.504302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.901µs"
	I0429 20:25:37.519404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.21268ms"
	I0429 20:25:37.519516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.698µs"
	I0429 20:25:39.495810       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8d116812e2fa] <==
	I0429 20:25:27.278575       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:25:27.322396       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.241.25"]
	I0429 20:25:27.381777       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:25:27.381896       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:25:27.381924       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:25:27.389649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:25:27.392153       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:25:27.392448       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:25:27.396161       1 config.go:192] "Starting service config controller"
	I0429 20:25:27.396372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:25:27.396564       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:25:27.396976       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:25:27.399035       1 config.go:319] "Starting node config controller"
	I0429 20:25:27.399236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:25:27.497521       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:25:27.497518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:25:27.500527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7748681b165f] <==
	W0429 20:25:09.310708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.311983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.372121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.372287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.389043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:25:09.389975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 20:25:09.402308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.402357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.414781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:25:09.414997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:25:09.463545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:25:09.463684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:25:09.473360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 20:25:09.473524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 20:25:09.538214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 20:25:09.538587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 20:25:09.595918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.596510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.751697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 20:25:09.752615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 20:25:09.794103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:25:09.794595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 20:25:09.800334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 20:25:09.800494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 20:25:11.092300       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:25:35 multinode-515700 kubelet[2116]: I0429 20:25:35.476423    2116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzm4l\" (UniqueName: \"kubernetes.io/projected/35a34648-701f-40b2-b391-6f400ce8ed45-kube-api-access-wzm4l\") pod \"coredns-7db6d8ff4d-drcsj\" (UID: \"35a34648-701f-40b2-b391-6f400ce8ed45\") " pod="kube-system/coredns-7db6d8ff4d-drcsj"
	Apr 29 20:25:35 multinode-515700 kubelet[2116]: I0429 20:25:35.476501    2116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35a34648-701f-40b2-b391-6f400ce8ed45-config-volume\") pod \"coredns-7db6d8ff4d-drcsj\" (UID: \"35a34648-701f-40b2-b391-6f400ce8ed45\") " pod="kube-system/coredns-7db6d8ff4d-drcsj"
	Apr 29 20:25:35 multinode-515700 kubelet[2116]: I0429 20:25:35.577427    2116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac7fbd67-6f97-4995-a9f9-64324ff5adad-tmp\") pod \"storage-provisioner\" (UID: \"ac7fbd67-6f97-4995-a9f9-64324ff5adad\") " pod="kube-system/storage-provisioner"
	Apr 29 20:25:35 multinode-515700 kubelet[2116]: I0429 20:25:35.577579    2116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9q4\" (UniqueName: \"kubernetes.io/projected/ac7fbd67-6f97-4995-a9f9-64324ff5adad-kube-api-access-pz9q4\") pod \"storage-provisioner\" (UID: \"ac7fbd67-6f97-4995-a9f9-64324ff5adad\") " pod="kube-system/storage-provisioner"
	Apr 29 20:25:37 multinode-515700 kubelet[2116]: I0429 20:25:37.467714    2116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.467696398 podStartE2EDuration="4.467696398s" podCreationTimestamp="2024-04-29 20:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 20:25:37.46742571 +0000 UTC m=+25.922580784" watchObservedRunningTime="2024-04-29 20:25:37.467696398 +0000 UTC m=+25.922851472"
	Apr 29 20:26:11 multinode-515700 kubelet[2116]: E0429 20:26:11.922751    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:26:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:26:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:26:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:26:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:27:11 multinode-515700 kubelet[2116]: E0429 20:27:11.924769    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:27:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:27:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:27:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:27:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:28:11 multinode-515700 kubelet[2116]: E0429 20:28:11.929049    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:28:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:28:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:28:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:28:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:29:11 multinode-515700 kubelet[2116]: E0429 20:29:11.925905    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:29:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:29:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:29:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:29:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [b26e455e6f82] <==
	I0429 20:25:36.743650       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 20:25:36.787682       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 20:25:36.790227       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 20:25:36.820440       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 20:25:36.822463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-515700_84e09442-fcd9-4e18-9e2f-7318e6322b1c!
	I0429 20:25:36.823363       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0dcda3dc-692f-4183-b089-a530533f9298", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-515700_84e09442-fcd9-4e18-9e2f-7318e6322b1c became leader
	I0429 20:25:36.927070       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-515700_84e09442-fcd9-4e18-9e2f-7318e6322b1c!
	

-- /stdout --
** stderr ** 
	W0429 20:29:24.630371    9340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700: (12.4440363s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-515700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (466.07s)

TestMultiNode/serial/DeployApp2Nodes (755.59s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- rollout status deployment/busybox
E0429 20:30:23.998677   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 20:32:53.449537   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 20:33:10.230464   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 20:35:23.998563   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 20:38:10.223596   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- rollout status deployment/busybox: exit status 1 (10m5.1978379s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 2 updated replicas are available...

-- /stdout --
** stderr ** 
	W0429 20:29:48.003047   13856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:39:53.205926   12528 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:39:55.090662    8520 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:39:56.515960   14124 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:39:58.853969   14032 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:40:02.250743    2548 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:40:07.986562    4952 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:40:19.356865    8540 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
E0429 20:40:24.005647   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:40:29.896365    7844 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:40:46.280551   13268 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:41:12.251259    1624 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:41:42.173803   12664 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:524: failed to resolve pod IPs: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0429 20:41:42.173803   12664 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-2t4c2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-2t4c2 -- nslookup kubernetes.io: exit status 1 (487.7926ms)

** stderr ** 
	W0429 20:41:43.105607    8500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-2t4c2 does not have a host assigned

** /stderr **
multinode_test.go:538: Pod busybox-fc5497c4f-2t4c2 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-dv5v8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-dv5v8 -- nslookup kubernetes.io: (2.1867501s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-2t4c2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-2t4c2 -- nslookup kubernetes.default: exit status 1 (448.7481ms)

** stderr ** 
	W0429 20:41:45.777106    5824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-2t4c2 does not have a host assigned

** /stderr **
multinode_test.go:548: Pod busybox-fc5497c4f-2t4c2 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-dv5v8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-2t4c2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-2t4c2 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (470.7949ms)

** stderr ** 
	W0429 20:41:46.869782    9276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-2t4c2 does not have a host assigned

** /stderr **
multinode_test.go:556: Pod busybox-fc5497c4f-2t4c2 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-dv5v8 -- nslookup kubernetes.default.svc.cluster.local
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700: (12.4872385s)
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25: (8.6577234s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| stop    | -p mount-start-2-089600                           | mount-start-2-089600 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:17 UTC | 29 Apr 24 20:17 UTC |
	| start   | -p mount-start-2-089600                           | mount-start-2-089600 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:17 UTC |                     |
	| delete  | -p mount-start-2-089600                           | mount-start-2-089600 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:20 UTC | 29 Apr 24 20:21 UTC |
	| delete  | -p mount-start-1-089600                           | mount-start-1-089600 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:22 UTC | 29 Apr 24 20:22 UTC |
	| start   | -p multinode-515700                               | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:22 UTC |                     |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- apply -f                   | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:29 UTC | 29 Apr 24 20:29 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- rollout                    | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:29 UTC |                     |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700     | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:22:01
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:22:01.431751    6560 out.go:291] Setting OutFile to fd 1000 ...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.432590    6560 out.go:304] Setting ErrFile to fd 1156...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.463325    6560 out.go:298] Setting JSON to false
	I0429 20:22:01.467738    6560 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24060,"bootTime":1714398060,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 20:22:01.467738    6560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 20:22:01.473386    6560 out.go:177] * [multinode-515700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 20:22:01.477900    6560 notify.go:220] Checking for updates...
	I0429 20:22:01.480328    6560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:22:01.485602    6560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:22:01.488123    6560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 20:22:01.490657    6560 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:22:01.493249    6560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:22:01.496241    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:22:01.497610    6560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:22:06.930154    6560 out.go:177] * Using the hyperv driver based on user configuration
	I0429 20:22:06.933587    6560 start.go:297] selected driver: hyperv
	I0429 20:22:06.933587    6560 start.go:901] validating driver "hyperv" against <nil>
	I0429 20:22:06.933587    6560 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:22:06.986262    6560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 20:22:06.987723    6560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:22:06.988334    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:22:06.988334    6560 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 20:22:06.988334    6560 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 20:22:06.988334    6560 start.go:340] cluster config:
	{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:22:06.988334    6560 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:22:06.992867    6560 out.go:177] * Starting "multinode-515700" primary control-plane node in "multinode-515700" cluster
	I0429 20:22:06.995976    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:22:06.996499    6560 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 20:22:06.996703    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:22:06.996741    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:22:06.996741    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:22:06.996741    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:22:06.996741    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json: {Name:mkdf346f9e30a055d7c79ffb416c8ce539e0c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:360] acquireMachinesLock for multinode-515700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700"
	I0429 20:22:06.999081    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:22:06.999081    6560 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 20:22:07.006481    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:22:07.006790    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:22:07.006790    6560 client.go:168] LocalClient.Create starting
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:22:09.217702    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:22:09.217822    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:09.217951    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:22:11.056235    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:12.618512    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:16.461966    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:22:17.019827    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: Creating VM...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:20.140355    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:22:20.140483    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:22.004896    6560 main.go:141] libmachine: Creating VHD
	I0429 20:22:22.004896    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:22:25.795387    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DA11902-3EE7-4F99-A00A-752C0686FD99
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:22:25.796445    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:25.796496    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:22:25.796702    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:22:25.814462    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:22:29.034595    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:29.035273    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:29.035337    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -SizeBytes 20000MB
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:31.671427    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-515700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:35.461856    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700 -DynamicMemoryEnabled $false
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:37.723890    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700 -Count 2
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\boot2docker.iso'
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:42.558432    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd'
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:45.265400    6560 main.go:141] libmachine: Starting VM...
	I0429 20:22:45.265400    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:50.732199    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:50.733048    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:50.733149    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:54.308058    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:56.517062    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:59.110985    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:59.111613    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:00.127675    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:02.349860    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:05.987459    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:08.224322    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:10.790333    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:10.791338    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:11.803237    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:14.061252    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:18.855659    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:23:18.855911    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:21.063683    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:23.697335    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:23.697580    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:23.703285    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:23.713965    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:23.713965    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:23:23.854760    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:23:23.854760    6560 buildroot.go:166] provisioning hostname "multinode-515700"
	I0429 20:23:23.854760    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:26.029157    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:26.029995    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:26.030093    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:28.624899    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:28.625217    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:28.625495    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700 && echo "multinode-515700" | sudo tee /etc/hostname
	I0429 20:23:28.799265    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700
	
	I0429 20:23:28.799376    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:30.924333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:33.588985    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:33.589381    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:33.589381    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:23:33.743242    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:23:33.743242    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:23:33.743242    6560 buildroot.go:174] setting up certificates
	I0429 20:23:33.743242    6560 provision.go:84] configureAuth start
	I0429 20:23:33.743939    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:35.885562    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:38.477298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:40.581307    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:43.165623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:43.165853    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:43.165933    6560 provision.go:143] copyHostCerts
	I0429 20:23:43.166093    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:23:43.166093    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:23:43.166093    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:23:43.166722    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:23:43.168141    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:23:43.168305    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:23:43.168305    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:23:43.168887    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:23:43.169614    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:23:43.170245    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:23:43.170340    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:23:43.170731    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:23:43.171712    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700 san=[127.0.0.1 172.17.241.25 localhost minikube multinode-515700]
	I0429 20:23:43.368646    6560 provision.go:177] copyRemoteCerts
	I0429 20:23:43.382882    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:23:43.382882    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:45.539057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:48.109324    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:23:48.217340    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8343588s)
	I0429 20:23:48.217478    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:23:48.218375    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:23:48.267636    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:23:48.267636    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 20:23:48.316493    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:23:48.316784    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:23:48.372851    6560 provision.go:87] duration metric: took 14.6294509s to configureAuth
	I0429 20:23:48.372952    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:23:48.373086    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:23:48.373086    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:50.522765    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:50.522998    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:50.523146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:53.169650    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:53.170462    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:53.170462    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:23:53.302673    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:23:53.302726    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:23:53.302726    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:23:53.302726    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:55.434984    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:58.060160    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:58.061082    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:58.067077    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:58.068199    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:58.068292    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:23:58.226608    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:23:58.227212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:00.358933    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:02.944293    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:02.944373    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:02.950227    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:02.950958    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:02.950958    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:24:05.224184    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:24:05.224184    6560 machine.go:97] duration metric: took 46.3681587s to provisionDockerMachine
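The `diff -u ... || { mv ...; daemon-reload; enable; restart; }` command above is a write-then-swap pattern: the candidate unit is written to `docker.service.new`, and the move plus daemon-reload only happen when the content actually differs (here `diff` also fails because the old file does not exist yet, hence the `can't stat` message followed by the symlink creation). A small sketch of the same pattern against local stand-in paths, with the systemd reload steps reduced to a comment:

```shell
#!/bin/sh
# Sketch of minikube's update-if-changed unit install. Paths are local
# stand-ins, not the real /lib/systemd/system locations.
UNIT=./docker.service.test
printf '[Unit]\nDescription=old\n' > "$UNIT"
printf '[Unit]\nDescription=new\n' > "$UNIT.new"

if ! diff -u "$UNIT" "$UNIT.new" >/dev/null 2>&1; then
    # Content differs (or the old unit is missing): swap the new file in.
    mv "$UNIT.new" "$UNIT"
    # The real provisioner would now run:
    #   sudo systemctl -f daemon-reload && sudo systemctl -f enable docker \
    #     && sudo systemctl -f restart docker
fi
```

When the files match, nothing moves and Docker is left untouched, so repeated provisioning does not cause spurious restarts.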
	I0429 20:24:05.224184    6560 client.go:171] duration metric: took 1m58.2164577s to LocalClient.Create
	I0429 20:24:05.224184    6560 start.go:167] duration metric: took 1m58.2164577s to libmachine.API.Create "multinode-515700"
	I0429 20:24:05.224184    6560 start.go:293] postStartSetup for "multinode-515700" (driver="hyperv")
	I0429 20:24:05.224184    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:24:05.241199    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:24:05.241199    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:07.393879    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:09.983789    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:09.984033    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:09.984469    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:10.092254    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8510176s)
	I0429 20:24:10.107982    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:24:10.116700    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:24:10.116700    6560 command_runner.go:130] > ID=buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:24:10.116700    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:24:10.116700    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:24:10.116700    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:24:10.117268    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:24:10.118515    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:24:10.118515    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:24:10.132514    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:24:10.152888    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:24:10.201665    6560 start.go:296] duration metric: took 4.9774423s for postStartSetup
	I0429 20:24:10.204966    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:12.345708    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:12.345785    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:12.345855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:14.957675    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:24:14.960758    6560 start.go:128] duration metric: took 2m7.9606641s to createHost
	I0429 20:24:14.962026    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:17.100197    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:17.100281    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:17.100354    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:19.725196    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:19.725860    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:19.725860    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:24:19.867560    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422259.868914581
	
	I0429 20:24:19.867560    6560 fix.go:216] guest clock: 1714422259.868914581
	I0429 20:24:19.867694    6560 fix.go:229] Guest: 2024-04-29 20:24:19.868914581 +0000 UTC Remote: 2024-04-29 20:24:14.9613787 +0000 UTC m=+133.724240401 (delta=4.907535881s)
	I0429 20:24:19.867694    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:22.005967    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:24.588016    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:24.588016    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:24.588016    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422259
	I0429 20:24:24.741766    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:24:19 UTC 2024
	
	I0429 20:24:24.741837    6560 fix.go:236] clock set: Mon Apr 29 20:24:19 UTC 2024
	 (err=<nil>)
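The clock-fix sequence above reads the guest clock over SSH (`date +%s.%N`), compares it with the host-side timestamp, and issues `sudo date -s @<epoch>` when the skew is large enough (here a 4.9s delta). A sketch of that decision, with hard-coded epoch values standing in for the two clock reads and an illustrative threshold:

```shell
#!/bin/sh
# Sketch of the guest/host clock-skew check. The epoch values and THRESHOLD
# are illustrative stand-ins for the timestamps read in the log above.
guest=1714422259   # epoch seconds reported by the VM over SSH
host=1714422254    # epoch seconds on the controlling host
delta=$((guest - host))
[ "$delta" -lt 0 ] && delta=$((-delta))

THRESHOLD=2
if [ "$delta" -gt "$THRESHOLD" ]; then
    # The real provisioner would run this over SSH on the guest:
    echo "sudo date -s @$host" > ./clockfix.test
fi
```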
	I0429 20:24:24.741837    6560 start.go:83] releasing machines lock for "multinode-515700", held for 2m17.7427319s
	I0429 20:24:24.742129    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:26.884301    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:29.475377    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:29.476046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:29.480912    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:24:29.481639    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:29.493304    6560 ssh_runner.go:195] Run: cat /version.json
	I0429 20:24:29.493304    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:31.702922    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:34.435635    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.436190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.436258    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.480228    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.481073    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.481135    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.531424    6560 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 20:24:34.531720    6560 ssh_runner.go:235] Completed: cat /version.json: (5.0383759s)
	I0429 20:24:34.545943    6560 ssh_runner.go:195] Run: systemctl --version
	I0429 20:24:34.614256    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:24:34.615354    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1343125s)
	I0429 20:24:34.615354    6560 command_runner.go:130] > systemd 252 (252)
	I0429 20:24:34.615354    6560 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 20:24:34.630005    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:24:34.639051    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 20:24:34.639955    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:24:34.653590    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:24:34.683800    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:24:34.683903    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:24:34.683903    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:34.684139    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:34.720958    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:24:34.735137    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:24:34.769077    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:24:34.791121    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:24:34.804751    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:24:34.838781    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.871052    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:24:34.905043    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.940043    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:24:34.975295    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:24:35.009502    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:24:35.044104    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:24:35.078095    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:24:35.099570    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:24:35.114246    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:24:35.146794    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:35.365920    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
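The run of `sed` commands above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver (`SystemdCgroup = false`), normalize the runc runtime type, and pin the sandbox image, before reloading systemd and restarting containerd. A sketch of the key substitution against a local copy of the config (the file path and the seed TOML fragment are stand-ins):

```shell
#!/bin/sh
# Sketch of the SystemdCgroup rewrite, against a scratch copy of config.toml
# rather than the real /etc/containerd/config.toml.
CFG=./config.toml.test
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution minikube issues: preserve indentation, flip the value.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
```

Because the regex captures the leading whitespace, the rewrite works regardless of how deeply the key is nested in the TOML tables.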
	I0429 20:24:35.402710    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:35.417050    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Unit]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:24:35.443946    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:24:35.443946    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:24:35.443946    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Service]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Type=notify
	I0429 20:24:35.443946    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:24:35.443946    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:24:35.443946    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:24:35.443946    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:24:35.443946    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:24:35.443946    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:24:35.443946    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:24:35.443946    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:24:35.443946    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:24:35.443946    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:24:35.443946    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:24:35.443946    6560 command_runner.go:130] > Delegate=yes
	I0429 20:24:35.443946    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:24:35.443946    6560 command_runner.go:130] > KillMode=process
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Install]
	I0429 20:24:35.444947    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:24:35.457957    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.500818    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:24:35.548559    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.585869    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.622879    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:24:35.694256    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.721660    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:35.757211    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:24:35.773795    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:24:35.779277    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:24:35.793892    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:24:35.813834    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:24:35.865638    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:24:36.085117    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:24:36.291781    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:24:36.291781    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:24:36.337637    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:36.567033    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:39.106704    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396504s)
	I0429 20:24:39.121937    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 20:24:39.164421    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.201973    6560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 20:24:39.432817    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 20:24:39.648494    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:39.872471    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 20:24:39.918782    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.959078    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:40.189711    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 20:24:40.314827    6560 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 20:24:40.327765    6560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 20:24:40.337989    6560 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 20:24:40.338077    6560 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 20:24:40.338077    6560 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Modify: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Change: 2024-04-29 20:24:40.227771386 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] >  Birth: -
	I0429 20:24:40.338228    6560 start.go:562] Will wait 60s for crictl version
	I0429 20:24:40.353543    6560 ssh_runner.go:195] Run: which crictl
	I0429 20:24:40.359551    6560 command_runner.go:130] > /usr/bin/crictl
	I0429 20:24:40.372542    6560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:24:40.422534    6560 command_runner.go:130] > Version:  0.1.0
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeName:  docker
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 20:24:40.422534    6560 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 20:24:40.433531    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.468470    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.477791    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.510922    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.518057    6560 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 20:24:40.518283    6560 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: 172.17.240.1/20
	I0429 20:24:40.538782    6560 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 20:24:40.546082    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:24:40.569927    6560 kubeadm.go:877] updating cluster {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:24:40.570125    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:24:40.581034    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:40.605162    6560 docker.go:685] Got preloaded images: 
	I0429 20:24:40.605162    6560 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 20:24:40.617894    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:40.637456    6560 command_runner.go:139] > {"Repositories":{}}
	I0429 20:24:40.652557    6560 ssh_runner.go:195] Run: which lz4
	I0429 20:24:40.659728    6560 command_runner.go:130] > /usr/bin/lz4
	I0429 20:24:40.659728    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 20:24:40.676390    6560 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:24:40.682600    6560 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 20:24:43.151463    6560 docker.go:649] duration metric: took 2.4917153s to copy over tarball
	I0429 20:24:43.166991    6560 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:24:51.777678    6560 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6106197s)
	I0429 20:24:51.777678    6560 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:24:51.848689    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:51.869772    6560 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 20:24:51.869772    6560 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 20:24:51.923721    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:52.150884    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:55.504316    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3534062s)
	I0429 20:24:55.515091    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:24:55.540357    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 20:24:55.540357    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:24:55.540557    6560 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 20:24:55.540557    6560 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:24:55.540557    6560 kubeadm.go:928] updating node { 172.17.241.25 8443 v1.30.0 docker true true} ...
	I0429 20:24:55.540557    6560 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-515700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.241.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:24:55.550945    6560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 20:24:55.586940    6560 command_runner.go:130] > cgroupfs
	I0429 20:24:55.587354    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:24:55.587354    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:24:55.587354    6560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:24:55.587354    6560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.241.25 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-515700 NodeName:multinode-515700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.241.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.241.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:24:55.587882    6560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.241.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-515700"
	  kubeletExtraArgs:
	    node-ip: 172.17.241.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.241.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:24:55.601173    6560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubeadm
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubectl
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubelet
	I0429 20:24:55.622022    6560 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:24:55.633924    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:24:55.654273    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 20:24:55.692289    6560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:24:55.726319    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:24:55.774801    6560 ssh_runner.go:195] Run: grep 172.17.241.25	control-plane.minikube.internal$ /etc/hosts
	I0429 20:24:55.781653    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.241.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:24:55.820570    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:56.051044    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:24:56.087660    6560 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700 for IP: 172.17.241.25
	I0429 20:24:56.087753    6560 certs.go:194] generating shared ca certs ...
	I0429 20:24:56.087824    6560 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 20:24:56.089063    6560 certs.go:256] generating profile certs ...
	I0429 20:24:56.089855    6560 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key
	I0429 20:24:56.089855    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt with IP's: []
	I0429 20:24:56.283640    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt ...
	I0429 20:24:56.284633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt: {Name:mk1286f657dae134d1e4806ec4fc1d780c02f0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.285633    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key ...
	I0429 20:24:56.285633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key: {Name:mka98d4501f3f942abed1092b1c97c4a2bbd30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.286633    6560 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d
	I0429 20:24:56.287300    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.241.25]
	I0429 20:24:56.456862    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d ...
	I0429 20:24:56.456862    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d: {Name:mk09d828589f59d94791e90fc999c9ce1101118e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d ...
	I0429 20:24:56.458476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d: {Name:mk92ebf0409a99e4a3e3430ff86080f164f4bc0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458796    6560 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt
	I0429 20:24:56.473961    6560 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key
	I0429 20:24:56.474965    6560 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key
	I0429 20:24:56.474965    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt with IP's: []
	I0429 20:24:56.680472    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt ...
	I0429 20:24:56.680472    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt: {Name:mkc600562c7738e3eec9de4025428e3048df463a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.682476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key ...
	I0429 20:24:56.682476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key: {Name:mkc9ba6e1afbc9ca05ce8802b568a72bfd19a90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 20:24:56.685482    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 20:24:56.693323    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 20:24:56.701358    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 20:24:56.702409    6560 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 20:24:56.702718    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 20:24:56.702843    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 20:24:56.705315    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:24:56.758912    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 20:24:56.809584    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:24:56.860874    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:24:56.918708    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:24:56.969377    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:24:57.018903    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:24:57.070438    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:24:57.119823    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:24:57.168671    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 20:24:57.216697    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 20:24:57.263605    6560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:24:57.314590    6560 ssh_runner.go:195] Run: openssl version
	I0429 20:24:57.325614    6560 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 20:24:57.340061    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:24:57.374639    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.394971    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.404667    6560 command_runner.go:130] > b5213941
	I0429 20:24:57.419162    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:24:57.454540    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 20:24:57.494441    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.501867    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.502224    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.517134    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.527174    6560 command_runner.go:130] > 51391683
	I0429 20:24:57.544472    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 20:24:57.579789    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 20:24:57.613535    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622605    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622696    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.637764    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.649176    6560 command_runner.go:130] > 3ec20f2e
	I0429 20:24:57.665410    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
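The three openssl/ln sequences above follow the standard OpenSSL trust-store convention: a CA in /etc/ssl/certs is located at lookup time via a symlink named after its subject hash. A minimal sketch of the idiom the log shows minikube running over ssh (the function name and paths are illustrative, not minikube's actual code):

```shell
# Sketch of the cert-install idiom seen in the log above.
# OpenSSL resolves CAs in a certs directory by subject hash, so each PEM
# needs a "<hash>.0" symlink. install_ca and its arguments are illustrative.
install_ca() {
  src="$1"; certs_dir="$2"
  test -s "$src" || return 1                      # skip missing/empty certs
  name="$(basename "$src")"
  ln -fs "$src" "$certs_dir/$name"
  hash="$(openssl x509 -hash -noout -in "$src")"  # e.g. b5213941 in the log
  test -L "$certs_dir/$hash.0" || ln -fs "$certs_dir/$name" "$certs_dir/$hash.0"
}
```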
	I0429 20:24:57.708796    6560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:24:57.716466    6560 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717133    6560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717510    6560 kubeadm.go:391] StartCluster: {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:24:57.729105    6560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 20:24:57.771112    6560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 20:24:57.792952    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 20:24:57.807601    6560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:24:57.837965    6560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 20:24:57.856820    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:156] found existing configuration files:
	
	I0429 20:24:57.872870    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:24:57.892109    6560 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.892549    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.905782    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:24:57.939062    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:24:57.957607    6560 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.957753    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.972479    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:24:58.006849    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.025918    6560 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.025918    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.039054    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.072026    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:24:58.092314    6560 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.092673    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.105776    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
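The grep/rm pairs above are one cleanup pass repeated per kubeconfig: if a leftover file does not reference the expected control-plane endpoint, it is removed so kubeadm regenerates it on init. Roughly (the directory is parameterized here for illustration; the log shows /etc/kubernetes):

```shell
# Sketch of the stale-kubeconfig cleanup loop seen in the log above.
# cleanup_stale and its conf_dir argument are illustrative.
cleanup_stale() {
  conf_dir="$1"
  endpoint="https://control-plane.minikube.internal:8443"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # keep the file only if it already points at the expected endpoint
    grep -q "$endpoint" "$conf_dir/$f" 2>/dev/null || rm -f "$conf_dir/$f"
  done
}
```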
	I0429 20:24:58.124274    6560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:24:58.562957    6560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:24:58.562957    6560 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:25:12.186137    6560 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186137    6560 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186277    6560 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 20:25:12.186320    6560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:25:12.186516    6560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.187085    6560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.187131    6560 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.190071    6560 out.go:204]   - Generating certificates and keys ...
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190667    6560 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.191251    6560 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191251    6560 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 20:25:12.193040    6560 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193086    6560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193701    6560 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193701    6560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.198949    6560 out.go:204]   - Booting up control plane ...
	I0429 20:25:12.193843    6560 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199855    6560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:25:12.200494    6560 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200494    6560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201207    6560 command_runner.go:130] > [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.202201    6560 command_runner.go:130] > [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 out.go:204]   - Configuring RBAC rules ...
	I0429 20:25:12.202201    6560 command_runner.go:130] > [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.204361    6560 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.206433    6560 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206433    6560 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206983    6560 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 kubeadm.go:309] 
	I0429 20:25:12.207142    6560 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 kubeadm.go:309] 
	I0429 20:25:12.207365    6560 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207404    6560 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207464    6560 kubeadm.go:309] 
	I0429 20:25:12.207514    6560 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 20:25:12.207589    6560 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:25:12.207764    6560 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.207807    6560 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.208030    6560 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 kubeadm.go:309] 
	I0429 20:25:12.208230    6560 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208230    6560 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208281    6560 kubeadm.go:309] 
	I0429 20:25:12.208375    6560 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208375    6560 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208442    6560 kubeadm.go:309] 
	I0429 20:25:12.208643    6560 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 20:25:12.208733    6560 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:25:12.208874    6560 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.208936    6560 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.209014    6560 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209014    6560 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209735    6560 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209735    6560 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--control-plane 
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--control-plane 
	I0429 20:25:12.210277    6560 kubeadm.go:309] 
	I0429 20:25:12.210538    6560 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] 
	I0429 20:25:12.210726    6560 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210726    6560 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210937    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
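The --discovery-token-ca-cert-hash printed in the join commands above is the standard kubeadm CA pin: a SHA-256 digest of the cluster CA's DER-encoded public key, prefixed with "sha256:". It can be recomputed from the CA certificate with the usual openssl pipeline (the function name and the cert path you pass are illustrative):

```shell
# Recompute a kubeadm discovery-token-ca-cert-hash from a CA certificate:
# SHA-256 over the DER-encoded Subject Public Key Info. ca_cert_hash is
# an illustrative helper name.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256 -hex \
    | awk '{print "sha256:" $NF}'
}
```

A joining node compares this pin against the CA it receives during bootstrap, which is why the same hash appears in both the control-plane and worker join commands.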
	I0429 20:25:12.210937    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:25:12.211197    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:25:12.215717    6560 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 20:25:12.234164    6560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 20:25:12.242817    6560 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: 2024-04-29 20:23:14.801002600 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Change: 2024-04-29 20:23:06.257000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] >  Birth: -
	I0429 20:25:12.242817    6560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 20:25:12.242817    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 20:25:12.301387    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 20:25:13.060621    6560 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > serviceaccount/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > daemonset.apps/kindnet created
	I0429 20:25:13.060707    6560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-515700 minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=multinode-515700 minikube.k8s.io/primary=true
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.092072    6560 command_runner.go:130] > -16
	I0429 20:25:13.092113    6560 ops.go:34] apiserver oom_adj: -16
	I0429 20:25:13.290753    6560 command_runner.go:130] > node/multinode-515700 labeled
	I0429 20:25:13.292700    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 20:25:13.306335    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.426974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:13.819653    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.947766    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.320587    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.442246    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.822864    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.943107    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.309117    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.432718    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.814070    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.933861    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.317878    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.440680    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.819594    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.942387    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.322995    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.435199    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.809136    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.932465    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.308164    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.429047    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.808817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.928476    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.310090    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.432479    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.815590    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.929079    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.321723    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.442512    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.819466    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.933742    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.309314    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.424974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.811819    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.952603    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.316794    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.432125    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.808890    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.925838    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.310021    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.434432    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.819369    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.948876    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.307817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.457947    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.818980    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.932003    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:25.308659    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:25.488149    6560 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 20:25:25.488217    6560 command_runner.go:130] > default   0         1s
	I0429 20:25:25.489686    6560 kubeadm.go:1107] duration metric: took 12.4288824s to wait for elevateKubeSystemPrivileges
	W0429 20:25:25.489686    6560 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:25:25.489686    6560 kubeadm.go:393] duration metric: took 27.7719601s to StartCluster
	I0429 20:25:25.490694    6560 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.490694    6560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:25.491677    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.493697    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 20:25:25.493697    6560 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:25:25.498680    6560 out.go:177] * Verifying Kubernetes components...
	I0429 20:25:25.493697    6560 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:25:25.494664    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:25.504657    6560 addons.go:69] Setting storage-provisioner=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:69] Setting default-storageclass=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:234] Setting addon storage-provisioner=true in "multinode-515700"
	I0429 20:25:25.504657    6560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-515700"
	I0429 20:25:25.504657    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.520673    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:25:25.944109    6560 command_runner.go:130] > apiVersion: v1
	I0429 20:25:25.944267    6560 command_runner.go:130] > data:
	I0429 20:25:25.944267    6560 command_runner.go:130] >   Corefile: |
	I0429 20:25:25.944367    6560 command_runner.go:130] >     .:53 {
	I0429 20:25:25.944367    6560 command_runner.go:130] >         errors
	I0429 20:25:25.944367    6560 command_runner.go:130] >         health {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            lameduck 5s
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         ready
	I0429 20:25:25.944367    6560 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            pods insecure
	I0429 20:25:25.944367    6560 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 20:25:25.944367    6560 command_runner.go:130] >            ttl 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         prometheus :9153
	I0429 20:25:25.944367    6560 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            max_concurrent 1000
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         cache 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loop
	I0429 20:25:25.944367    6560 command_runner.go:130] >         reload
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loadbalance
	I0429 20:25:25.944367    6560 command_runner.go:130] >     }
	I0429 20:25:25.944367    6560 command_runner.go:130] > kind: ConfigMap
	I0429 20:25:25.944367    6560 command_runner.go:130] > metadata:
	I0429 20:25:25.944367    6560 command_runner.go:130] >   creationTimestamp: "2024-04-29T20:25:11Z"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   name: coredns
	I0429 20:25:25.944367    6560 command_runner.go:130] >   namespace: kube-system
	I0429 20:25:25.944367    6560 command_runner.go:130] >   resourceVersion: "265"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   uid: af2c186a-a14a-4671-8545-05c5ef5d4a89
	I0429 20:25:25.949389    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 20:25:26.023682    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:25:26.408680    6560 command_runner.go:130] > configmap/coredns replaced
	I0429 20:25:26.414254    6560 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.417677    6560 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 20:25:26.417677    6560 node_ready.go:35] waiting up to 6m0s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.435291    6560 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.437034    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.438430    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Audit-Id: a2ae57e5-53a3-4342-ad5c-c2149e87ef04
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438430    6560 round_trippers.go:580]     Audit-Id: 2e6b22a8-9874-417c-a6a5-f7b7437121f7
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438909    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.439086    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:26.440203    6560 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.440298    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.440406    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.440406    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.440519    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:26.440519    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.459913    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.459962    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Audit-Id: 9ca07d91-957f-4992-9642-97b01e07dde3
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.459962    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"393","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.918339    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.918339    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918339    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918339    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.918300    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.918498    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918580    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918580    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Audit-Id: 70383541-35df-461a-b4fb-41bd3b56f11d
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Audit-Id: e628428d-1384-4709-a32e-084c9dfec614
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.929164    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"404","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.929400    6560 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-515700" context rescaled to 1 replicas
	I0429 20:25:26.929400    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.426913    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.426913    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.426913    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.426913    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.430795    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:27.430795    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Audit-Id: e4e6b2b1-e008-4f2a-bae4-3596fce97666
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.430996    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.431340    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.789217    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.789348    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.792426    6560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:25:27.791141    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:27.795103    6560 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:27.795205    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:25:27.795205    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.795205    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:27.795924    6560 addons.go:234] Setting addon default-storageclass=true in "multinode-515700"
	I0429 20:25:27.795924    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:27.796802    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.922993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.923088    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.923175    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.923175    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.929435    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:27.929435    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.929545    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.929545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.929638    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Audit-Id: 8ef77f9f-d18f-4fa7-ab77-85c137602c84
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.930046    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.432611    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.432611    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.432611    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.432611    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.441320    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:28.441862    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Audit-Id: d32cd9f8-494c-4a69-b028-606c7d354657
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.442308    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.442914    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
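The `node_ready.go` line above shows what the repeated GETs are for: minikube polls the Node object until its "Ready" condition flips to "True". As an illustration only (not minikube's actual implementation, and note the logged response bodies are truncated before the `status` section, so the sample payload below is hypothetical), a self-contained Go sketch of extracting that condition from a Node JSON document:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// node is a minimal subset of the Kubernetes v1 Node object; the field
// names (kind, status.conditions[].type/status) follow the standard API
// schema.
type node struct {
	Kind   string `json:"kind"`
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// readyStatus returns the value of the "Ready" condition ("True",
// "False", or "Unknown"), or "" if the condition is absent.
// Hypothetical helper name, for illustration.
func readyStatus(body []byte) (string, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return "", err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status, nil
		}
	}
	return "", nil
}

func main() {
	// Hypothetical sample payload: the response bodies in the log are
	// truncated before status.conditions, so this JSON is illustrative.
	sample := []byte(`{"kind":"Node","status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	s, err := readyStatus(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // prints "False", matching the node_ready.go log line
}
```

Polling then just repeats the GET on an interval (the log shows roughly 500 ms between requests) until this returns "True" or the wait deadline expires.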
	I0429 20:25:28.927674    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.927674    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.927674    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.927897    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.933213    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:28.933794    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Audit-Id: 75d40b2c-c2ed-4221-9361-88591791a649
	I0429 20:25:28.934208    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.422724    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.422898    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.422898    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.422975    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.426431    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:29.426876    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Audit-Id: dde47b6c-069b-408d-a5c6-0a2ea7439643
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.427261    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.918308    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.918308    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.918308    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.918407    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.921072    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:29.921072    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.921072    6560 round_trippers.go:580]     Audit-Id: d4643df6-68ad-4c4c-9604-a5a4d019fba1
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.922076    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.057466    6560 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:30.057636    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:25:30.057750    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:30.145026    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
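The `[executing ==>]` lines show how minikube's libmachine driver talks to Hyper-V: it shells out to `powershell.exe -NoProfile -NonInteractive` with a `Hyper-V\Get-VM` expression to read the VM's state, then its first adapter's first IP address. A minimal Go sketch that only constructs those expressions for a given VM name (the `buildStateCmd`/`buildIPCmd` helper names are hypothetical, not the driver's real API; the expression text itself is taken verbatim from the log above):

```go
package main

import "fmt"

// buildStateCmd builds the PowerShell expression for reading a Hyper-V
// VM's run state, matching the "[executing ==>]" log lines.
// Hypothetical helper name.
func buildStateCmd(vm string) string {
	return fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm)
}

// buildIPCmd builds the expression for the first IP address of the VM's
// first network adapter, as logged before the "172.17.241.25" stdout.
// Hypothetical helper name.
func buildIPCmd(vm string) string {
	return fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
}

func main() {
	fmt.Println(buildStateCmd("multinode-515700"))
	fmt.Println(buildIPCmd("multinode-515700"))
	// On a Windows host the driver passes each expression to:
	//   powershell.exe -NoProfile -NonInteractive <expression>
	// and the [stdout =====>] / [stderr =====>] log lines echo the result.
}
```

The returned IP is what later feeds the `sshutil.go` "new ssh client" line, where minikube opens an SSH session to the VM to copy and apply the addon manifests.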
	I0429 20:25:30.424041    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.424310    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.424310    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.424310    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.428606    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.429051    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.429263    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Audit-Id: 2c59a467-8079-41ed-ac1d-f96dd660d343
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.429435    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.931993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.931993    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.931993    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.931993    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.936635    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.936635    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.937644    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Audit-Id: 9214de5b-8221-4c68-b6b9-a92fe7d41fd1
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.938175    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.939066    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:31.423866    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.423866    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.423866    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.423988    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.427054    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.427827    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.427827    6560 round_trippers.go:580]     Audit-Id: 5f66acb8-ef38-4220-83b6-6e87fbec6f58
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.427869    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:31.932664    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.932664    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.932761    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.932761    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.936680    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.936680    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Audit-Id: f9fb721e-ccaf-4e33-ac69-8ed840761191
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.937009    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.312723    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:32.424680    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.424953    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.424953    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.424953    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.428624    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:32.428906    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.428906    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Audit-Id: d3a39f3a-571d-46c0-a442-edf136da8a11
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.429531    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.858444    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:32.926226    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.926317    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.926393    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.926393    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.929204    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:32.929583    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.929583    6560 round_trippers.go:580]     Audit-Id: 55fc987d-65c0-4ac8-95d2-7fa4185e179b
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.929734    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.930327    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.034553    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:33.425759    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.425833    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.425833    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.425833    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.428624    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.429656    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Audit-Id: d581fce7-8906-48d7-8e13-2d1aba9dec04
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.429916    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.430438    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:33.930984    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.931053    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.931053    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.931053    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.933717    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.933717    6560 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.933717    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.933717    6560 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Audit-Id: 680ed792-db71-4b29-abb9-40f7154e8b1e
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.933717    6560 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 20:25:33.933717    6560 command_runner.go:130] > pod/storage-provisioner created
	I0429 20:25:33.933717    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.428102    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.428102    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.428102    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.428102    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.431722    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:34.432624    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Audit-Id: 86cc0608-3000-42b0-9ce8-4223e32d60c3
	I0429 20:25:34.432684    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.433082    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.932029    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.932316    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.932316    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.932316    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.936749    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:34.936749    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Audit-Id: 0e63a4db-3dd4-4e74-8b79-c019b6b97e89
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.937415    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:35.024893    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:35.025151    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:35.025317    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:35.170634    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:35.371184    6560 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0429 20:25:35.371418    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 20:25:35.371571    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.371571    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.371571    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.380781    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:35.381213    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Audit-Id: 31f5e265-3d38-4520-88d0-33f47325189c
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Length: 1273
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.381380    6560 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 20:25:35.382106    6560 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.382183    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 20:25:35.382183    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:35.382269    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.390758    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.390758    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.390758    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Audit-Id: 4dbb716e-2d97-4c38-b342-f63e7d38a4d0
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Length: 1220
	I0429 20:25:35.391190    6560 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.395279    6560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 20:25:35.397530    6560 addons.go:505] duration metric: took 9.9037568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 20:25:35.421733    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.421733    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.421733    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.421733    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.452743    6560 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 20:25:35.452743    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.452743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Audit-Id: 316d0393-7ba5-4629-87cb-7ae54d0ea965
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.454477    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.455068    6560 node_ready.go:49] node "multinode-515700" has status "Ready":"True"
	I0429 20:25:35.455148    6560 node_ready.go:38] duration metric: took 9.0374019s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:35.455148    6560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:35.455213    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:35.455213    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.455213    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.455213    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.473128    6560 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 20:25:35.473128    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Audit-Id: 81e159c0-b703-47ba-a9f3-82cc907b8705
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.475820    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"431","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0429 20:25:35.481714    6560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:35.482325    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.482379    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.482379    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.482432    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.491093    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.491093    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Audit-Id: a2eb7ca2-d415-4a7c-a1f0-1ac743bd8f82
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.492090    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.493335    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.493335    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.493335    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.493419    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.496084    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:35.496084    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.496084    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Audit-Id: f61c97ad-ee7a-4666-9244-d7d2091b5d09
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.497097    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.497131    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.497332    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.991323    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.991323    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.991323    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.991323    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.995451    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:35.995451    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Audit-Id: faa8a1a4-279f-4dc3-99c8-8c3b9e9ed746
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:35.996592    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.997239    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.997292    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.997292    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.997292    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.999987    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:35.999987    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Audit-Id: 070c7fff-f707-4b9a-9aef-031cedc68a8c
	I0429 20:25:36.000411    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.483004    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.483004    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.483004    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.483004    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.488152    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:36.488152    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.488152    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.488678    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Audit-Id: fb5cc675-b39d-4cb0-ba8c-24140b3d95e8
	I0429 20:25:36.489818    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.490926    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.490926    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.490985    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.490985    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.494654    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:36.494654    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Audit-Id: fe6d880a-4cf8-4b10-8c7f-debde123eafc
	I0429 20:25:36.495423    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.991643    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.991643    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.991643    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.991855    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.996384    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:36.996384    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Audit-Id: 933a6dd5-a0f7-4380-8189-3e378a8a620d
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:36.997332    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.997760    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.997760    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.997760    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.997760    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.000889    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.000889    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.001211    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Audit-Id: 0342e743-45a6-4fd7-97be-55a766946396
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.001274    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.001759    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.495712    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:37.495712    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.495712    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.495712    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.508671    6560 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 20:25:37.508671    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Audit-Id: d30c6154-a41b-4a0d-976f-d19f40e67223
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.508671    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0429 20:25:37.510663    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.510663    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.510663    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.510663    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.513686    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.513686    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Audit-Id: 397b83a5-95f9-4df8-a76b-042ecc96922c
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.514662    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.514662    6560 pod_ready.go:92] pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.514662    6560 pod_ready.go:81] duration metric: took 2.0329329s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-515700
	I0429 20:25:37.514662    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.514662    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.514662    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.517691    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.517691    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Audit-Id: df53f071-06ed-4797-a51b-7d01b84cac86
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.518412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-515700","namespace":"kube-system","uid":"85f2dc9a-17b5-413c-ab83-d3cbe955571e","resourceVersion":"319","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.241.25:2379","kubernetes.io/config.hash":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.mirror":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.seen":"2024-04-29T20:25:11.718525866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0429 20:25:37.519044    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.519044    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.519124    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.519124    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.521788    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.521788    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Audit-Id: ee5fdb3e-9869-4cd7-996a-a25b453aeb6b
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.521944    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.522769    6560 pod_ready.go:92] pod "etcd-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.522844    6560 pod_ready.go:81] duration metric: took 8.1819ms for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.522844    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.523015    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-515700
	I0429 20:25:37.523015    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.523079    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.523079    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.525575    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.525575    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Audit-Id: cd9d851c-f606-48c9-8da3-3d194ab5464f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.526015    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-515700","namespace":"kube-system","uid":"f5a212eb-87a9-476a-981a-9f31731f39e6","resourceVersion":"312","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.241.25:8443","kubernetes.io/config.hash":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.mirror":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.seen":"2024-04-29T20:25:11.718530866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0429 20:25:37.526356    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.526356    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.526356    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.526356    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.535954    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:37.535954    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Audit-Id: 018aa21f-d408-4777-b54c-eb7aa2295a34
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.536470    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.536974    6560 pod_ready.go:92] pod "kube-apiserver-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.537034    6560 pod_ready.go:81] duration metric: took 14.0881ms for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537034    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537183    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-515700
	I0429 20:25:37.537276    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.537297    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.537297    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.539964    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.539964    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Audit-Id: d3232756-fc07-4b33-a3b5-989d2932cec4
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.541274    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-515700","namespace":"kube-system","uid":"2c9ba563-c2af-45b7-bc1e-bf39759a339b","resourceVersion":"315","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.mirror":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.seen":"2024-04-29T20:25:11.718532066Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0429 20:25:37.541935    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.541935    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.541935    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.541935    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.555960    6560 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 20:25:37.555960    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Audit-Id: 2d117219-3b1a-47fe-99a4-7e5aea7e84d3
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.555960    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.555960    6560 pod_ready.go:92] pod "kube-controller-manager-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.555960    6560 pod_ready.go:81] duration metric: took 18.9251ms for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.555960    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.556943    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gx5x
	I0429 20:25:37.556943    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.556943    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.556943    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.559965    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.560477    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.560477    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Audit-Id: 14e6d1be-eac6-4f20-9621-b409c951fae1
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.560781    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gx5x","generateName":"kube-proxy-","namespace":"kube-system","uid":"886ac698-7e9b-431b-b822-577331b02f41","resourceVersion":"407","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"027f1d05-009f-4199-81e9-45b0a2d3710f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"027f1d05-009f-4199-81e9-45b0a2d3710f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0429 20:25:37.561552    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.561581    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.561581    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.561581    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.567713    6560 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 20:25:37.567713    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Audit-Id: 678df177-6944-4d30-b889-62528c06bab2
	I0429 20:25:37.567713    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.568391    6560 pod_ready.go:92] pod "kube-proxy-6gx5x" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.568391    6560 pod_ready.go:81] duration metric: took 12.4313ms for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.568391    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.701559    6560 request.go:629] Waited for 132.9214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701779    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701853    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.701853    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.701853    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.705314    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.706129    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.706129    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.706183    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Audit-Id: 4fb010ad-4d68-4aa0-9ba4-f68d04faa9e8
	I0429 20:25:37.706412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-515700","namespace":"kube-system","uid":"096d3e94-25ba-49b3-b329-6fb47fc88f25","resourceVersion":"334","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.mirror":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.seen":"2024-04-29T20:25:11.718533166Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0429 20:25:37.905204    6560 request.go:629] Waited for 197.8802ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.905322    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.905466    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.909057    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.909159    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Audit-Id: a6cecf7e-83ad-4d5f-8cbb-a65ced7e83ce
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.909286    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.909697    6560 pod_ready.go:92] pod "kube-scheduler-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.909697    6560 pod_ready.go:81] duration metric: took 341.3037ms for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.909697    6560 pod_ready.go:38] duration metric: took 2.4545299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:37.909697    6560 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:25:37.923721    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:25:37.956142    6560 command_runner.go:130] > 2047
	I0429 20:25:37.956226    6560 api_server.go:72] duration metric: took 12.462433s to wait for apiserver process to appear ...
	I0429 20:25:37.956226    6560 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:25:37.956330    6560 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:25:37.965150    6560 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:25:37.965332    6560 round_trippers.go:463] GET https://172.17.241.25:8443/version
	I0429 20:25:37.965364    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.965364    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.965364    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.967124    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:37.967124    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Audit-Id: c3b17e5f-8eb5-4422-bcd1-48cea5831311
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.967423    6560 round_trippers.go:580]     Content-Length: 263
	I0429 20:25:37.967423    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 20:25:37.967530    6560 api_server.go:141] control plane version: v1.30.0
	I0429 20:25:37.967530    6560 api_server.go:131] duration metric: took 11.2306ms to wait for apiserver health ...
	I0429 20:25:37.967629    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:25:38.109818    6560 request.go:629] Waited for 142.1878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110201    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110256    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.110275    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.110275    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.118070    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.118070    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Audit-Id: 557b3073-d14e-4919-8133-995d5b042d22
	I0429 20:25:38.119823    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.123197    6560 system_pods.go:59] 8 kube-system pods found
	I0429 20:25:38.123197    6560 system_pods.go:61] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.123197    6560 system_pods.go:74] duration metric: took 155.566ms to wait for pod list to return data ...
	I0429 20:25:38.123197    6560 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:25:38.295950    6560 request.go:629] Waited for 172.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.296300    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.296300    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.300424    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:38.300424    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Content-Length: 261
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Audit-Id: 7466bf5b-fa07-4a6b-bc06-274738fc9259
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.300674    6560 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"13c4332f-9236-4f04-9e46-f5a98bc3d731","resourceVersion":"343","creationTimestamp":"2024-04-29T20:25:24Z"}}]}
	I0429 20:25:38.300674    6560 default_sa.go:45] found service account: "default"
	I0429 20:25:38.300674    6560 default_sa.go:55] duration metric: took 177.4758ms for default service account to be created ...
	I0429 20:25:38.300674    6560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:25:38.498686    6560 request.go:629] Waited for 197.291ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.498782    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.499005    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.499005    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.499005    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.506756    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.507387    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Audit-Id: ffc5efdb-4263-4450-8ff2-c1bb3f979300
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.507485    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.507503    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.508809    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.512231    6560 system_pods.go:86] 8 kube-system pods found
	I0429 20:25:38.512305    6560 system_pods.go:89] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.512305    6560 system_pods.go:89] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.512451    6560 system_pods.go:89] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.512451    6560 system_pods.go:126] duration metric: took 211.7756ms to wait for k8s-apps to be running ...
	I0429 20:25:38.512451    6560 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:25:38.526027    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:25:38.555837    6560 system_svc.go:56] duration metric: took 43.3852ms WaitForService to wait for kubelet
	I0429 20:25:38.555837    6560 kubeadm.go:576] duration metric: took 13.0620394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:25:38.556007    6560 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:25:38.701455    6560 request.go:629] Waited for 145.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701896    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701917    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.701917    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.702032    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.709221    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.709221    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Audit-Id: 9241b2a0-c483-4bfb-8a19-8f5c9b610b53
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.709221    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0429 20:25:38.710061    6560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:25:38.710061    6560 node_conditions.go:123] node cpu capacity is 2
	I0429 20:25:38.710061    6560 node_conditions.go:105] duration metric: took 154.0529ms to run NodePressure ...
	I0429 20:25:38.710061    6560 start.go:240] waiting for startup goroutines ...
	I0429 20:25:38.710061    6560 start.go:245] waiting for cluster config update ...
	I0429 20:25:38.710061    6560 start.go:254] writing updated cluster config ...
	I0429 20:25:38.717493    6560 out.go:177] 
	I0429 20:25:38.721129    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.735840    6560 out.go:177] * Starting "multinode-515700-m02" worker node in "multinode-515700" cluster
	I0429 20:25:38.738518    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:25:38.738518    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:25:38.738983    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:25:38.739240    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:25:38.739481    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.745029    6560 start.go:360] acquireMachinesLock for multinode-515700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:25:38.745029    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m02"
	I0429 20:25:38.745029    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 20:25:38.745575    6560 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 20:25:38.748852    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:25:38.748852    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:25:38.748852    6560 client.go:168] LocalClient.Create starting
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:40.746212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:25:42.605453    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:47.992432    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:47.992702    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:47.996014    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:25:48.551162    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: Creating VM...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:51.874174    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:25:51.874221    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:53.736899    6560 main.go:141] libmachine: Creating VHD
	I0429 20:25:53.737514    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D65FFD0C-285E-44D0-8723-21544BDDE71A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:25:57.529054    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:00.734035    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -SizeBytes 20000MB
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stderr =====>] : 
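The disk-seeding sequence above (create a small fixed VHD, write a "magic" tar with the SSH key into it, convert to dynamic, then resize) can be sketched in Python for the tar step. This is a minimal sketch, not minikube's exact layout: the archive entry name here is an assumption.

```python
import io
import tarfile

def write_key_tar(path, key_bytes):
    """Write a tar archive holding a single SSH-key entry, loosely
    mirroring the 'Writing magic tar header' / 'Writing SSH key tar
    header' steps in the log. The entry name is illustrative only."""
    with tarfile.open(path, "w") as tw:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")  # assumed name
        info.size = len(key_bytes)
        info.mode = 0o644
        tw.addfile(info, io.BytesIO(key_bytes))

write_key_tar("disk-seed.tar", b"ssh-rsa AAAA... demo\n")
with tarfile.open("disk-seed.tar") as tr:
    print(tr.getnames())  # → ['.ssh/authorized_keys']
```

The fixed-then-convert dance matters because a fixed VHD's contents start at a known offset, so the seed tar lands at the front of the virtual disk before the image is converted to a dynamic VHD and grown.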
	I0429 20:26:03.314283    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stdout =====>] : 
Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-515700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700-m02 -DynamicMemoryEnabled $false
	I0429 20:26:09.480100    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700-m02 -Count 2
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:11.716979    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\boot2docker.iso'
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:14.377298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd'
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:17.090909    6560 main.go:141] libmachine: Starting VM...
	I0429 20:26:17.090909    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m02
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:22.527096    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:26.113296    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:28.339433    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:30.953587    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:30.953628    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:31.955478    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:34.197688    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:34.197831    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:34.197901    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:37.817016    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:42.682666    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:42.683603    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:43.685897    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:48.604877    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:48.604915    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:48.604999    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:50.794876    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:50.795093    6560 main.go:141] libmachine: [stderr =====>] : 
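The "Waiting for host to start..." stretch above is a poll loop: query VM state, query the first NIC's first IP address, sleep, and repeat until an address appears. A minimal sketch of that loop, with a stand-in getter instead of the PowerShell query (names and timings here are assumptions):

```python
import time

def wait_for_ip(get_ip, attempts=10, delay=0.0):
    """Poll a getter until it returns a non-empty IP, mirroring the
    repeated (Get-VM ...).networkadapters[0].ipaddresses[0] calls in
    the log. 'delay' is kept configurable so tests can run instantly."""
    for _ in range(attempts):
        ip = get_ip()
        if ip:
            return ip
        time.sleep(delay)
    raise TimeoutError("host did not report an IP address")

# Simulated host: empty output for the first four probes, then an address,
# matching the empty-stdout lines followed by 172.17.253.145 in the log.
responses = iter(["", "", "", "", "172.17.253.145"])
print(wait_for_ip(lambda: next(responses)))  # → 172.17.253.145
```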
	I0429 20:26:50.795407    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:26:50.795407    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:52.992195    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:52.992243    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:52.992331    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:55.630552    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:55.641728    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:26:55.642758    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:26:55.769182    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:26:55.769182    6560 buildroot.go:166] provisioning hostname "multinode-515700-m02"
	I0429 20:26:55.769333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:57.942857    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:57.943721    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:57.943789    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:00.610012    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:00.610498    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:00.617342    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:00.618022    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:00.618022    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700-m02 && echo "multinode-515700-m02" | sudo tee /etc/hostname
	I0429 20:27:00.774430    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m02
	
	I0429 20:27:00.775391    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:02.970796    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:02.971352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:02.971577    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:05.640782    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:05.640782    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:05.640782    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:27:05.779330    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
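The /etc/hosts snippet just above has three branches: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line, or append a new one. The same logic can be sketched on an in-memory string (a sketch only, not minikube's code):

```python
import re

def ensure_host_entry(hosts_text, hostname):
    """Apply the shell snippet's 127.0.1.1 logic to a string:
    no-op if hostname is present, rewrite an existing 127.0.1.1
    line, otherwise append one."""
    if re.search(r"\s" + re.escape(hostname) + r"$", hosts_text, re.M):
        return hosts_text  # already mapped: nothing to do
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$",
                      f"127.0.1.1 {hostname}", hosts_text, flags=re.M)
    return hosts_text + f"127.0.1.1 {hostname}\n"

print(ensure_host_entry("127.0.0.1 localhost\n", "multinode-515700-m02"))
# → 127.0.0.1 localhost
#   127.0.1.1 multinode-515700-m02
```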
	I0429 20:27:05.779330    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:27:05.779435    6560 buildroot.go:174] setting up certificates
	I0429 20:27:05.779435    6560 provision.go:84] configureAuth start
	I0429 20:27:05.779531    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:07.939785    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:10.607752    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:10.608236    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:10.608319    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:15.428095    6560 provision.go:143] copyHostCerts
	I0429 20:27:15.429066    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:27:15.429066    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:27:15.429066    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:27:15.429626    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:27:15.430936    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:27:15.431366    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:27:15.431366    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:27:15.431875    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:27:15.432822    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:27:15.433064    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:27:15.433064    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:27:15.433498    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
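The copyHostCerts lines above follow a replace pattern for each cert: if the destination exists, remove it, then copy the source over. A minimal sketch of that pattern (paths below are temporary stand-ins, not the real cert store):

```python
import os
import shutil
import tempfile

def copy_host_cert(src, dst):
    """Replace-style copy mirroring the 'found ..., removing ...'
    then 'cp: ...' sequence in the log."""
    if os.path.exists(dst):
        os.remove(dst)  # drop the stale copy first
    shutil.copyfile(src, dst)

d = tempfile.mkdtemp()
src = os.path.join(d, "ca.pem")
dst = os.path.join(d, "store-ca.pem")
with open(src, "w") as f:
    f.write("cert-data")
copy_host_cert(src, dst)
copy_host_cert(src, dst)  # second run replaces, does not fail
print(open(dst).read())  # → cert-data
```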
	I0429 20:27:15.434807    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700-m02 san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]
	I0429 20:27:15.511954    6560 provision.go:177] copyRemoteCerts
	I0429 20:27:15.527105    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:27:15.527105    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:20.368198    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:20.368587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:20.368930    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:20.467819    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9406764s)
	I0429 20:27:20.468832    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:27:20.469887    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:27:20.524889    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:27:20.525559    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 20:27:20.578020    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:27:20.578217    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:27:20.634803    6560 provision.go:87] duration metric: took 14.8552541s to configureAuth
	I0429 20:27:20.634874    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:27:20.635533    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:27:20.635638    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:22.779762    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:25.427345    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:25.427345    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:25.428345    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:27:25.562050    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:27:25.562195    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:27:25.562515    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:27:25.562592    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:30.404141    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:30.405195    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:30.412105    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:30.413171    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:30.413700    6560 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.241.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:27:30.578477    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.241.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:27:30.578477    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:32.772580    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:35.465933    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:35.466426    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:35.466509    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:27:37.701893    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
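The `diff ... || { mv ...; systemctl ... }` command above makes the unit install idempotent: the new file only replaces the old one (and triggers daemon-reload/restart) when the content actually differs or the file is missing, as it is here on first boot. A sketch of that check-then-install pattern (file paths below are temporary stand-ins):

```python
import os
import tempfile

def install_if_changed(path, new_content):
    """Write 'path' only when its content differs from 'new_content'
    (or the file is absent), in the spirit of the diff/mv command in
    the log. Returns True when a restart would be needed."""
    try:
        with open(path) as f:
            if f.read() == new_content:
                return False  # unchanged: skip daemon-reload/restart
    except FileNotFoundError:
        pass  # first install, like the 'can't stat' branch above
    with open(path, "w") as f:
        f.write(new_content)
    return True

unit = os.path.join(tempfile.mkdtemp(), "docker.service")
print(install_if_changed(unit, "[Unit]\n"))  # → True (first install)
print(install_if_changed(unit, "[Unit]\n"))  # → False (no-op)
```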
	I0429 20:27:37.701981    6560 machine.go:97] duration metric: took 46.9062133s to provisionDockerMachine
	I0429 20:27:37.702052    6560 client.go:171] duration metric: took 1m58.9522849s to LocalClient.Create
	I0429 20:27:37.702194    6560 start.go:167] duration metric: took 1m58.9524269s to libmachine.API.Create "multinode-515700"
	I0429 20:27:37.702194    6560 start.go:293] postStartSetup for "multinode-515700-m02" (driver="hyperv")
	I0429 20:27:37.702194    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:27:37.716028    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:27:37.716028    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:39.888498    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:39.889355    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:39.889707    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:42.576527    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:42.688245    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9721792s)
	I0429 20:27:42.703472    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:27:42.710185    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:27:42.710391    6560 command_runner.go:130] > ID=buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:27:42.710391    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:27:42.710562    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:27:42.710562    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:27:42.710640    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:27:42.712121    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:27:42.712121    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:27:42.725734    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:27:42.745571    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:27:42.798223    6560 start.go:296] duration metric: took 5.0959902s for postStartSetup
	I0429 20:27:42.801718    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:44.985225    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:47.630520    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:27:47.633045    6560 start.go:128] duration metric: took 2m8.8864784s to createHost
	I0429 20:27:47.633167    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:49.823309    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:49.823412    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:49.823495    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:52.524084    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:52.524183    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:52.530451    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:52.531204    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:52.531204    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:27:52.658970    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422472.660345683
	
	I0429 20:27:52.659208    6560 fix.go:216] guest clock: 1714422472.660345683
	I0429 20:27:52.659208    6560 fix.go:229] Guest: 2024-04-29 20:27:52.660345683 +0000 UTC Remote: 2024-04-29 20:27:47.6330452 +0000 UTC m=+346.394263801 (delta=5.027300483s)
	I0429 20:27:52.659208    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:57.461861    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:57.461927    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:57.467747    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:57.468699    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:57.468699    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422472
	I0429 20:27:57.617018    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:27:52 UTC 2024
	
	I0429 20:27:57.617018    6560 fix.go:236] clock set: Mon Apr 29 20:27:52 UTC 2024
	 (err=<nil>)
	I0429 20:27:57.617018    6560 start.go:83] releasing machines lock for "multinode-515700-m02", held for 2m18.8709228s
	I0429 20:27:57.618122    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:59.795247    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:02.475615    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:02.475867    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:02.479078    6560 out.go:177] * Found network options:
	I0429 20:28:02.481434    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.483990    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.486147    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.488513    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 20:28:02.490094    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.492090    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:28:02.492090    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:02.504078    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:28:02.504078    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:07.440744    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.440938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.441026    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.467629    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.629032    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:28:07.630105    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1379759s)
	I0429 20:28:07.630105    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 20:28:07.630229    6560 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1259881s)
	W0429 20:28:07.630229    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:28:07.649597    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:28:07.685721    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:28:07.685954    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:28:07.685954    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:07.686227    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:07.722613    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:28:07.736060    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:28:07.771561    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:28:07.793500    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:28:07.809715    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:28:07.846242    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.882404    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:28:07.918280    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.956186    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:28:07.994072    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:28:08.029701    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:28:08.067417    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:28:08.104772    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:28:08.126209    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:28:08.140685    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:28:08.181598    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:08.410362    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:28:08.449856    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:08.466974    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Unit]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:28:08.492900    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:28:08.492900    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:28:08.492900    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Service]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Type=notify
	I0429 20:28:08.492900    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:28:08.492900    6560 command_runner.go:130] > Environment=NO_PROXY=172.17.241.25
	I0429 20:28:08.492900    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:28:08.492900    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:28:08.492900    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:28:08.492900    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:28:08.492900    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:28:08.492900    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:28:08.492900    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:28:08.492900    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:28:08.492900    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:28:08.493891    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:28:08.493891    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:28:08.493891    6560 command_runner.go:130] > Delegate=yes
	I0429 20:28:08.493891    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:28:08.493891    6560 command_runner.go:130] > KillMode=process
	I0429 20:28:08.493891    6560 command_runner.go:130] > [Install]
	I0429 20:28:08.493891    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:28:08.505928    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.548562    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:28:08.606977    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.652185    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.695349    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:28:08.785230    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.816602    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:08.853434    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:28:08.870019    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:28:08.876256    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:28:08.890247    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:28:08.911471    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:28:08.962890    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:28:09.201152    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:28:09.397561    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:28:09.398166    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:28:09.451159    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:09.673084    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:29:10.809648    6560 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 20:29:10.809648    6560 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 20:29:10.809648    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1361028s)
	I0429 20:29:10.827248    6560 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 20:29:10.852173    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 20:29:10.852279    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 20:29:10.852319    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 20:29:10.852739    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854100    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854118    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854142    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854175    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 20:29:10.865335    6560 out.go:177] 
	W0429 20:29:10.865335    6560 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 20:29:10.865335    6560 out.go:239] * 
	W0429 20:29:10.869400    6560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:29:10.876700    6560 out.go:177] 
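The journal above records the terminal failure chain twice (once echoed through command_runner, once inside the RUNTIME_ENABLE error): dockerd[1020] logs "Starting up", times out dialing /run/containerd/containerd.sock, and systemd marks docker.service failed. When triaging many such reports it can help to pull just the decisive lines out of a saved `journalctl --no-pager -u docker` dump. A minimal sketch (illustrative only — this helper and its marker list are not part of the minikube test harness):

```python
# Hypothetical triage helper: given the text of `journalctl --no-pager -u docker`,
# return only the lines that explain why docker.service failed, i.e. daemon-level
# fatal errors and systemd's verdict lines. The marker strings below are taken
# from the log output in this report, not from any minikube API.
FAILURE_MARKERS = (
    "failed to start daemon",
    "Main process exited",
    "Failed with result",
    "Failed to start Docker Application Container Engine",
)

def docker_failure_lines(journal_text: str) -> list[str]:
    # Keep only lines containing one of the failure markers, in log order.
    return [
        line
        for line in journal_text.splitlines()
        if any(marker in line for marker in FAILURE_MARKERS)
    ]
```

Run against the `-- stdout --` block above, this surfaces the containerd socket dial timeout and the systemd exit-code verdict while dropping the ~90 plugin-loading info lines.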
	
	
	==> Docker <==
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.461843314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.461859614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.462658312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.631948639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.632114139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.632143439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:25:36 multinode-515700 dockerd[1331]: time="2024-04-29T20:25:36.632332338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:32 multinode-515700 dockerd[1325]: 2024/04/29 20:29:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:32 multinode-515700 dockerd[1325]: 2024/04/29 20:29:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.311678535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.311805235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.311843635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.314238729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:49 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:29:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1a58f6d29ec95da5888905a6941e048b2c50f12c8ae76975e21ae109c16a8bb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 20:29:50 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:29:50Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935705225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935856331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935874732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.936415956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	32c6f043cec2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   e1a58f6d29ec9       busybox-fc5497c4f-dv5v8
	15da1b832ef20       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   73ab97e30d3d0       coredns-7db6d8ff4d-drcsj
	b26e455e6f823       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   0274116a036cf       storage-provisioner
	11141cf0a01e5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              16 minutes ago      Running             kindnet-cni               0                   5c226cf922db1       kindnet-lt84t
	8d116812e2fa7       a0bf559e280cf                                                                                         16 minutes ago      Running             kube-proxy                0                   c4e88976a7bb5       kube-proxy-6gx5x
	9b9ad8fbed853       c42f13656d0b2                                                                                         17 minutes ago      Running             kube-apiserver            0                   e1040c321d522       kube-apiserver-multinode-515700
	7748681b165fb       259c8277fcbbc                                                                                         17 minutes ago      Running             kube-scheduler            0                   ab47450efbe05       kube-scheduler-multinode-515700
	01f30fac305bc       3861cfcd7c04c                                                                                         17 minutes ago      Running             etcd                      0                   b5202cca492c4       etcd-multinode-515700
	c5de44f1f1066       c7aad43836fa5                                                                                         17 minutes ago      Running             kube-controller-manager   0                   4ae9818227910       kube-controller-manager-multinode-515700
	
	
	==> coredns [15da1b832ef2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 658b75f59357881579d818bea4574a099ffd8bf4e34cb2d6414c381890635887b0895574e607ab48d69c0bc2657640404a00a48de79c5b96ce27f6a68e70a912
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36587 - 14172 "HINFO IN 4725538422205950284.7962128480288568612. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062354244s
	[INFO] 10.244.0.3:46156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244102s
	[INFO] 10.244.0.3:48057 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.210765088s
	[INFO] 10.244.0.3:47676 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15403778s
	[INFO] 10.244.0.3:57534 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.237328274s
	[INFO] 10.244.0.3:38726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000345103s
	[INFO] 10.244.0.3:54844 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.04703092s
	[INFO] 10.244.0.3:51897 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000879808s
	[INFO] 10.244.0.3:57925 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122101s
	[INFO] 10.244.0.3:39997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012692914s
	[INFO] 10.244.0.3:37301 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000333403s
	[INFO] 10.244.0.3:60294 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172702s
	[INFO] 10.244.0.3:33135 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000250902s
	[INFO] 10.244.0.3:46585 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141701s
	[INFO] 10.244.0.3:41280 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127902s
	[INFO] 10.244.0.3:46602 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000220001s
	[INFO] 10.244.0.3:47802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077001s
	
	
	==> describe nodes <==
	Name:               multinode-515700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:42:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:40:31 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:40:31 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:40:31 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:40:31 +0000   Mon, 29 Apr 2024 20:25:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.241.25
	  Hostname:    multinode-515700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc8de88647d944658545c7ae4a702aea
	  System UUID:                68adc21b-67d2-5446-9537-0dea9fd880a0
	  Boot ID:                    9507eca5-5f1f-4862-974e-a61fb27048d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dv5v8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-drcsj                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-multinode-515700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-lt84t                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-multinode-515700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-multinode-515700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-6gx5x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-multinode-515700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node multinode-515700 event: Registered Node multinode-515700 in Controller
	  Normal  NodeReady                16m                kubelet          Node multinode-515700 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 20:24] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.212417] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +31.830340] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.112166] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.613568] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.259380] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.863180] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.213718] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.233297] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.301716] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.953055] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.129851] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.793087] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[Apr29 20:25] systemd-fstab-generator[1710]: Ignoring "noauto" option for root device
	[  +0.110579] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.112113] systemd-fstab-generator[2108]: Ignoring "noauto" option for root device
	[  +0.165104] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.220827] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.255309] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.248279] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 20:26] hrtimer: interrupt took 3466547 ns
	[Apr29 20:29] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [01f30fac305b] <==
	{"level":"info","ts":"2024-04-29T20:25:05.594687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.594905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f received MsgVoteResp from 46980dd3bf48ce1f at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.595201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f became leader at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.595536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46980dd3bf48ce1f elected leader 46980dd3bf48ce1f at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.604545Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.611204Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"46980dd3bf48ce1f","local-member-attributes":"{Name:multinode-515700 ClientURLs:[https://172.17.241.25:2379]}","request-path":"/0/members/46980dd3bf48ce1f/attributes","cluster-id":"abc09309ccc0cb76","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T20:25:05.611653Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:25:05.620024Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.241.25:2379"}
	{"level":"info","ts":"2024-04-29T20:25:05.630573Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:25:05.63137Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T20:25:05.649307Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"abc09309ccc0cb76","local-member-id":"46980dd3bf48ce1f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.651933Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.653346Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.649239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T20:25:05.64915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T20:25:33.808305Z","caller":"traceutil/trace.go:171","msg":"trace[1613443414] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"266.125878ms","start":"2024-04-29T20:25:33.542119Z","end":"2024-04-29T20:25:33.808245Z","steps":["trace[1613443414] 'process raft request'  (duration: 265.820275ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:25:55.320778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.998939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4514"}
	{"level":"info","ts":"2024-04-29T20:25:55.320958Z","caller":"traceutil/trace.go:171","msg":"trace[1653665751] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:466; }","duration":"111.193233ms","start":"2024-04-29T20:25:55.209749Z","end":"2024-04-29T20:25:55.320942Z","steps":["trace[1653665751] 'range keys from in-memory index tree'  (duration: 110.919042ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:26:47.825608Z","caller":"traceutil/trace.go:171","msg":"trace[1666429790] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"149.644884ms","start":"2024-04-29T20:26:47.675822Z","end":"2024-04-29T20:26:47.825467Z","steps":["trace[1666429790] 'process raft request'  (duration: 149.476087ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:35:06.24957Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":689}
	{"level":"info","ts":"2024-04-29T20:35:06.267107Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":689,"took":"17.292815ms","hash":1810199713,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2174976,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-29T20:35:06.267193Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1810199713,"revision":689,"compact-revision":-1}
	{"level":"info","ts":"2024-04-29T20:40:06.283473Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":929}
	{"level":"info","ts":"2024-04-29T20:40:06.293716Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":929,"took":"9.365404ms","hash":2966419944,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-29T20:40:06.293891Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2966419944,"revision":929,"compact-revision":689}
	
	
	==> kernel <==
	 20:42:08 up 19 min,  0 users,  load average: 0.26, 0.39, 0.31
	Linux multinode-515700 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11141cf0a01e] <==
	I0429 20:40:06.224010       1 main.go:227] handling current node
	I0429 20:40:16.230604       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:40:16.230707       1 main.go:227] handling current node
	I0429 20:40:26.243991       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:40:26.244097       1 main.go:227] handling current node
	I0429 20:40:36.249984       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:40:36.250090       1 main.go:227] handling current node
	I0429 20:40:46.260167       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:40:46.260372       1 main.go:227] handling current node
	I0429 20:40:56.275247       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:40:56.275401       1 main.go:227] handling current node
	I0429 20:41:06.281012       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:06.281170       1 main.go:227] handling current node
	I0429 20:41:16.296558       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:16.296671       1 main.go:227] handling current node
	I0429 20:41:26.309655       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:26.310492       1 main.go:227] handling current node
	I0429 20:41:36.316612       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:36.316700       1 main.go:227] handling current node
	I0429 20:41:46.333007       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:46.333112       1 main.go:227] handling current node
	I0429 20:41:56.342898       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:56.343020       1 main.go:227] handling current node
	I0429 20:42:06.358041       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:42:06.358634       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9b9ad8fbed85] <==
	I0429 20:25:08.268566       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 20:25:08.278746       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 20:25:08.278862       1 policy_source.go:224] refreshing policies
	E0429 20:25:08.294082       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 20:25:08.344166       1 controller.go:615] quota admission added evaluator for: namespaces
	E0429 20:25:08.380713       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 20:25:08.456691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 20:25:09.052862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 20:25:09.062497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 20:25:09.063038       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 20:25:10.434046       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 20:25:10.531926       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 20:25:10.667114       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 20:25:10.682682       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.241.25]
	I0429 20:25:10.685084       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 20:25:10.705095       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 20:25:11.202529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 20:25:11.660474       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 20:25:11.702512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 20:25:11.739640       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 20:25:25.195544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 20:25:25.294821       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 20:41:45.603992       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54600: use of closed network connection
	E0429 20:41:46.683622       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54606: use of closed network connection
	E0429 20:41:47.742503       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54616: use of closed network connection
	
	
	==> kube-controller-manager [c5de44f1f106] <==
	I0429 20:25:24.549051       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 20:25:24.561849       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 20:25:24.566483       1 shared_informer.go:320] Caches are synced for disruption
	I0429 20:25:24.590460       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 20:25:24.618362       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 20:25:24.656708       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 20:25:25.127753       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 20:25:25.137681       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 20:25:25.137746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 20:25:25.742477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="536.801912ms"
	I0429 20:25:25.820241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.613668ms"
	I0429 20:25:25.820606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.801µs"
	I0429 20:25:26.647122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.452819ms"
	I0429 20:25:26.673190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.454556ms"
	I0429 20:25:26.673366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.301µs"
	I0429 20:25:35.442523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48µs"
	I0429 20:25:35.504302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.901µs"
	I0429 20:25:37.519404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.21268ms"
	I0429 20:25:37.519516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.698µs"
	I0429 20:25:39.495810       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 20:29:47.937478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.419556ms"
	I0429 20:29:47.961915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.36964ms"
	I0429 20:29:47.962862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.499µs"
	I0429 20:29:52.098445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.730146ms"
	I0429 20:29:52.098921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.902µs"
	
	
	==> kube-proxy [8d116812e2fa] <==
	I0429 20:25:27.278575       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:25:27.322396       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.241.25"]
	I0429 20:25:27.381777       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:25:27.381896       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:25:27.381924       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:25:27.389649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:25:27.392153       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:25:27.392448       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:25:27.396161       1 config.go:192] "Starting service config controller"
	I0429 20:25:27.396372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:25:27.396564       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:25:27.396976       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:25:27.399035       1 config.go:319] "Starting node config controller"
	I0429 20:25:27.399236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:25:27.497521       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:25:27.497518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:25:27.500527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7748681b165f] <==
	W0429 20:25:09.310708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.311983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.372121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.372287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.389043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:25:09.389975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 20:25:09.402308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.402357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.414781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:25:09.414997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:25:09.463545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:25:09.463684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:25:09.473360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 20:25:09.473524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 20:25:09.538214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 20:25:09.538587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 20:25:09.595918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.596510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.751697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 20:25:09.752615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 20:25:09.794103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:25:09.794595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 20:25:09.800334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 20:25:09.800494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 20:25:11.092300       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:37:11 multinode-515700 kubelet[2116]: E0429 20:37:11.924329    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:37:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:37:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:37:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:37:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:38:11 multinode-515700 kubelet[2116]: E0429 20:38:11.928823    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:38:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:38:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:38:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:38:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:39:11 multinode-515700 kubelet[2116]: E0429 20:39:11.928961    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:39:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:39:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:39:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:39:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:40:11 multinode-515700 kubelet[2116]: E0429 20:40:11.930322    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:40:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:40:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:40:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:40:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:41:11 multinode-515700 kubelet[2116]: E0429 20:41:11.923434    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:41:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:41:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:41:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:41:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [b26e455e6f82] <==
	I0429 20:25:36.743650       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 20:25:36.787682       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 20:25:36.790227       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 20:25:36.820440       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 20:25:36.822463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-515700_84e09442-fcd9-4e18-9e2f-7318e6322b1c!
	I0429 20:25:36.823363       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0dcda3dc-692f-4183-b089-a530533f9298", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-515700_84e09442-fcd9-4e18-9e2f-7318e6322b1c became leader
	I0429 20:25:36.927070       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-515700_84e09442-fcd9-4e18-9e2f-7318e6322b1c!
	

-- /stdout --
** stderr ** 
	W0429 20:42:00.405988     272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700: (12.3552511s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-515700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-2t4c2
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-515700 describe pod busybox-fc5497c4f-2t4c2
helpers_test.go:282: (dbg) kubectl --context multinode-515700 describe pod busybox-fc5497c4f-2t4c2:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-2t4c2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blkc9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-blkc9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m10s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (755.59s)

TestMultiNode/serial/PingHostFrom2Pods (47.59s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-2t4c2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:572: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-2t4c2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (450.8518ms)

** stderr ** 
	W0429 20:42:23.498086    9348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-2t4c2 does not have a host assigned

** /stderr **
multinode_test.go:574: Pod busybox-fc5497c4f-2t4c2 could not resolve 'host.minikube.internal': exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-dv5v8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-dv5v8 -- sh -c "ping -c 1 172.17.240.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-515700 -- exec busybox-fc5497c4f-dv5v8 -- sh -c "ping -c 1 172.17.240.1": exit status 1 (10.5393702s)

-- stdout --
	PING 172.17.240.1 (172.17.240.1): 56 data bytes
	
	--- 172.17.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0429 20:42:24.535212    9476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.17.240.1) from pod (busybox-fc5497c4f-dv5v8): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700: (12.5954441s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25: (8.8236186s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-515700                               | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:22 UTC |                     |
	|         | --wait=true --memory=2200                         |                  |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- apply -f                   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:29 UTC | 29 Apr 24 20:29 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- rollout                    | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:29 UTC |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | busybox-fc5497c4f-dv5v8                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-dv5v8 -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.240.1                         |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:22:01
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:22:01.431751    6560 out.go:291] Setting OutFile to fd 1000 ...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.432590    6560 out.go:304] Setting ErrFile to fd 1156...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.463325    6560 out.go:298] Setting JSON to false
	I0429 20:22:01.467738    6560 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24060,"bootTime":1714398060,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 20:22:01.467738    6560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 20:22:01.473386    6560 out.go:177] * [multinode-515700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 20:22:01.477900    6560 notify.go:220] Checking for updates...
	I0429 20:22:01.480328    6560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:22:01.485602    6560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:22:01.488123    6560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 20:22:01.490657    6560 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:22:01.493249    6560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:22:01.496241    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:22:01.497610    6560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:22:06.930154    6560 out.go:177] * Using the hyperv driver based on user configuration
	I0429 20:22:06.933587    6560 start.go:297] selected driver: hyperv
	I0429 20:22:06.933587    6560 start.go:901] validating driver "hyperv" against <nil>
	I0429 20:22:06.933587    6560 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:22:06.986262    6560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 20:22:06.987723    6560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:22:06.988334    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:22:06.988334    6560 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 20:22:06.988334    6560 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 20:22:06.988334    6560 start.go:340] cluster config:
	{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:22:06.988334    6560 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:22:06.992867    6560 out.go:177] * Starting "multinode-515700" primary control-plane node in "multinode-515700" cluster
	I0429 20:22:06.995976    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:22:06.996499    6560 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 20:22:06.996703    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:22:06.996741    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:22:06.996741    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:22:06.996741    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:22:06.996741    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json: {Name:mkdf346f9e30a055d7c79ffb416c8ce539e0c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:360] acquireMachinesLock for multinode-515700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700"
	I0429 20:22:06.999081    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:22:06.999081    6560 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 20:22:07.006481    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:22:07.006790    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:22:07.006790    6560 client.go:168] LocalClient.Create starting
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:22:09.217702    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:22:09.217822    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:09.217951    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:22:11.056235    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:12.618512    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stderr =====>] : 
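	[editor's note] The switch-selection step above queries Hyper-V for all switches that are either External or the well-known "Default Switch" GUID, sorted so external switches come first, and then picks the first result. A minimal Go sketch of parsing that ConvertTo-Json output (hypothetical names; not minikube's actual driver code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the three properties selected by the Get-VMSwitch
// pipeline in the log above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
}

// pickSwitch unmarshals the JSON array and returns the first switch name
// (the PowerShell query already sorted by SwitchType).
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	if len(switches) == 0 {
		return "", fmt.Errorf("no virtual switch found")
	}
	return switches[0].Name, nil
}

func main() {
	// The JSON captured in the stdout block above, minus whitespace.
	out := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	name, err := pickSwitch(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // Default Switch
}
```

	This matches the "Using switch \"Default Switch\"" line further down: only the internal Default Switch was present, so it was chosen.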
	I0429 20:22:16.461966    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:22:17.019827    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: Creating VM...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:20.140355    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:22:20.140483    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:22.004896    6560 main.go:141] libmachine: Creating VHD
	I0429 20:22:22.004896    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:22:25.795387    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DA11902-3EE7-4F99-A00A-752C0686FD99
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:22:25.796445    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:25.796496    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:22:25.796702    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:22:25.814462    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:22:29.034595    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:29.035273    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:29.035337    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -SizeBytes 20000MB
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:31.671427    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-515700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:35.461856    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700 -DynamicMemoryEnabled $false
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:37.723890    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700 -Count 2
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\boot2docker.iso'
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:42.558432    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd'
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:45.265400    6560 main.go:141] libmachine: Starting VM...
	I0429 20:22:45.265400    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stderr =====>] : 
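	[editor's note] The VM-creation phase above is a fixed sequence of Hyper-V cmdlets: New-VM, Set-VMMemory (dynamic memory off), Set-VMProcessor, Set-VMDvdDrive (boot ISO), Add-VMHardDiskDrive, Start-VM. A hedged Go sketch that reproduces the command strings in that order (parameters are illustrative placeholders, not the exact values minikube computes):

```go
package main

import "fmt"

// creationSteps lists, in order, the Hyper-V cmdlets the driver ran above.
// name, dir, memMB and cpus are parameters here purely for illustration.
func creationSteps(name, dir string, memMB, cpus int) []string {
	return []string{
		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes %dMB`, name, dir, memMB),
		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count %d`, name, cpus),
		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
	}
}

func main() {
	for _, cmd := range creationSteps("multinode-515700", `C:\minikube\machines\multinode-515700`, 2200, 2) {
		fmt.Println(cmd)
	}
}
```

	Note that the VHD was created earlier as a tiny 10MB fixed disk, converted to dynamic, then resized to 20000MB before being attached here.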
	I0429 20:22:48.486826    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:50.732199    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:50.733048    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:50.733149    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:54.308058    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:56.517062    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:59.110985    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:59.111613    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:00.127675    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:02.349860    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:05.987459    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:08.224322    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:10.790333    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:10.791338    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:11.803237    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:14.061252    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stderr =====>] : 
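	[editor's note] The "Waiting for host to start..." block above alternates two PowerShell queries — the VM state, then its first IP address — about once per second until DHCP hands the guest an address (172.17.241.25 on the fifth query here). A minimal Go sketch of that retry loop, with hypothetical getState/getIP hooks standing in for the PowerShell calls:

```go
package main

import (
	"fmt"
	"time"
)

// waitForIP polls: verify the VM reports Running, then ask for its first
// IP address, sleeping between attempts until an address shows up.
func waitForIP(getState, getIP func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if getState() != "Running" {
			return "", fmt.Errorf("VM not running")
		}
		if ip := getIP(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", fmt.Errorf("no IP after %d attempts", attempts)
}

func main() {
	calls := 0
	ip, err := waitForIP(
		func() string { return "Running" },
		func() string {
			calls++
			if calls < 5 { // the log shows five queries before an IP appeared
				return ""
			}
			return "172.17.241.25"
		},
		10, 0,
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 172.17.241.25
}
```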
	I0429 20:23:18.855659    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:23:18.855911    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:21.063683    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:23.697335    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:23.697580    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:23.703285    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:23.713965    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:23.713965    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:23:23.854760    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:23:23.854760    6560 buildroot.go:166] provisioning hostname "multinode-515700"
	I0429 20:23:23.854760    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:26.029157    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:26.029995    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:26.030093    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:28.624899    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:28.625217    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:28.625495    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700 && echo "multinode-515700" | sudo tee /etc/hostname
	I0429 20:23:28.799265    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700
	
	I0429 20:23:28.799376    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:30.924333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:33.588985    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:33.589381    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:33.589381    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:23:33.743242    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:23:33.743242    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:23:33.743242    6560 buildroot.go:174] setting up certificates
	I0429 20:23:33.743242    6560 provision.go:84] configureAuth start
	I0429 20:23:33.743939    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:35.885562    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:38.477298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:40.581307    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:43.165623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:43.165853    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:43.165933    6560 provision.go:143] copyHostCerts
	I0429 20:23:43.166093    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:23:43.166093    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:23:43.166093    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:23:43.166722    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:23:43.168141    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:23:43.168305    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:23:43.168305    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:23:43.168887    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:23:43.169614    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:23:43.170245    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:23:43.170340    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:23:43.170731    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:23:43.171712    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700 san=[127.0.0.1 172.17.241.25 localhost minikube multinode-515700]
	I0429 20:23:43.368646    6560 provision.go:177] copyRemoteCerts
	I0429 20:23:43.382882    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:23:43.382882    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:45.539057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:48.109324    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:23:48.217340    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8343588s)
	I0429 20:23:48.217478    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:23:48.218375    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:23:48.267636    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:23:48.267636    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 20:23:48.316493    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:23:48.316784    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:23:48.372851    6560 provision.go:87] duration metric: took 14.6294509s to configureAuth
	I0429 20:23:48.372952    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:23:48.373086    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:23:48.373086    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:50.522765    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:50.522998    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:50.523146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:53.169650    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:53.170462    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:53.170462    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:23:53.302673    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:23:53.302726    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:23:53.302726    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:23:53.302726    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:55.434984    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:58.060160    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:58.061082    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:58.067077    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:58.068199    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:58.068292    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:23:58.226608    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:23:58.227212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:00.358933    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:02.944293    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:02.944373    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:02.950227    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:02.950958    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:02.950958    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:24:05.224184    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:24:05.224184    6560 machine.go:97] duration metric: took 46.3681587s to provisionDockerMachine
	I0429 20:24:05.224184    6560 client.go:171] duration metric: took 1m58.2164577s to LocalClient.Create
	I0429 20:24:05.224184    6560 start.go:167] duration metric: took 1m58.2164577s to libmachine.API.Create "multinode-515700"
	I0429 20:24:05.224184    6560 start.go:293] postStartSetup for "multinode-515700" (driver="hyperv")
	I0429 20:24:05.224184    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:24:05.241199    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:24:05.241199    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:07.393879    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:09.983789    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:09.984033    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:09.984469    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:10.092254    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8510176s)
	I0429 20:24:10.107982    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:24:10.116700    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:24:10.116700    6560 command_runner.go:130] > ID=buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:24:10.116700    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:24:10.116700    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:24:10.116700    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:24:10.117268    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:24:10.118515    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:24:10.118515    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:24:10.132514    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:24:10.152888    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:24:10.201665    6560 start.go:296] duration metric: took 4.9774423s for postStartSetup
	I0429 20:24:10.204966    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:12.345708    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:12.345785    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:12.345855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:14.957675    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:24:14.960758    6560 start.go:128] duration metric: took 2m7.9606641s to createHost
	I0429 20:24:14.962026    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:17.100197    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:17.100281    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:17.100354    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:19.725196    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:19.725860    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:19.725860    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:24:19.867560    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422259.868914581
	
	I0429 20:24:19.867560    6560 fix.go:216] guest clock: 1714422259.868914581
	I0429 20:24:19.867694    6560 fix.go:229] Guest: 2024-04-29 20:24:19.868914581 +0000 UTC Remote: 2024-04-29 20:24:14.9613787 +0000 UTC m=+133.724240401 (delta=4.907535881s)
	I0429 20:24:19.867694    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:22.005967    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:24.588016    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:24.588016    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:24.588016    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422259
	I0429 20:24:24.741766    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:24:19 UTC 2024
	
	I0429 20:24:24.741837    6560 fix.go:236] clock set: Mon Apr 29 20:24:19 UTC 2024
	 (err=<nil>)
	I0429 20:24:24.741837    6560 start.go:83] releasing machines lock for "multinode-515700", held for 2m17.7427319s
	I0429 20:24:24.742129    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:26.884301    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:29.475377    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:29.476046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:29.480912    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:24:29.481639    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:29.493304    6560 ssh_runner.go:195] Run: cat /version.json
	I0429 20:24:29.493304    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:31.702922    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:34.435635    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.436190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.436258    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.480228    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.481073    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.481135    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.531424    6560 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 20:24:34.531720    6560 ssh_runner.go:235] Completed: cat /version.json: (5.0383759s)
	I0429 20:24:34.545943    6560 ssh_runner.go:195] Run: systemctl --version
	I0429 20:24:34.614256    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:24:34.615354    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1343125s)
	I0429 20:24:34.615354    6560 command_runner.go:130] > systemd 252 (252)
	I0429 20:24:34.615354    6560 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 20:24:34.630005    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:24:34.639051    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 20:24:34.639955    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:24:34.653590    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:24:34.683800    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:24:34.683903    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:24:34.683903    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:34.684139    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:34.720958    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:24:34.735137    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:24:34.769077    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:24:34.791121    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:24:34.804751    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:24:34.838781    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.871052    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:24:34.905043    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.940043    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:24:34.975295    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:24:35.009502    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:24:35.044104    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:24:35.078095    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:24:35.099570    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:24:35.114246    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:24:35.146794    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:35.365920    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:24:35.402710    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:35.417050    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Unit]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:24:35.443946    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:24:35.443946    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:24:35.443946    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Service]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Type=notify
	I0429 20:24:35.443946    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:24:35.443946    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:24:35.443946    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:24:35.443946    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:24:35.443946    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:24:35.443946    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:24:35.443946    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:24:35.443946    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:24:35.443946    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:24:35.443946    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:24:35.443946    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:24:35.443946    6560 command_runner.go:130] > Delegate=yes
	I0429 20:24:35.443946    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:24:35.443946    6560 command_runner.go:130] > KillMode=process
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Install]
	I0429 20:24:35.444947    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:24:35.457957    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.500818    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:24:35.548559    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.585869    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.622879    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:24:35.694256    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.721660    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:35.757211    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:24:35.773795    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:24:35.779277    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:24:35.793892    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:24:35.813834    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:24:35.865638    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:24:36.085117    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:24:36.291781    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:24:36.291781    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:24:36.337637    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:36.567033    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:39.106704    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396504s)
	I0429 20:24:39.121937    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 20:24:39.164421    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.201973    6560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 20:24:39.432817    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 20:24:39.648494    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:39.872471    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 20:24:39.918782    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.959078    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:40.189711    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 20:24:40.314827    6560 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 20:24:40.327765    6560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 20:24:40.337989    6560 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 20:24:40.338077    6560 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 20:24:40.338077    6560 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Modify: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Change: 2024-04-29 20:24:40.227771386 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] >  Birth: -
	I0429 20:24:40.338228    6560 start.go:562] Will wait 60s for crictl version
	I0429 20:24:40.353543    6560 ssh_runner.go:195] Run: which crictl
	I0429 20:24:40.359551    6560 command_runner.go:130] > /usr/bin/crictl
	I0429 20:24:40.372542    6560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:24:40.422534    6560 command_runner.go:130] > Version:  0.1.0
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeName:  docker
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 20:24:40.422534    6560 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 20:24:40.433531    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.468470    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.477791    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.510922    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.518057    6560 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 20:24:40.518283    6560 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: 172.17.240.1/20
	I0429 20:24:40.538782    6560 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 20:24:40.546082    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
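	The host-entry update above uses a grep-and-rewrite pattern that stays idempotent: strip any prior `host.minikube.internal` line, then append the fresh one. A minimal sketch of that pattern against a throwaway file (paths and the IP are illustrative, not the real `/etc/hosts`):

```shell
# Stand-in hosts file for this sketch (real code targets /etc/hosts via sudo cp).
HOSTS=/tmp/demo-hosts
printf '127.0.0.1\tlocalhost\n1.2.3.4\thost.minikube.internal\n' > "$HOSTS"

# Remove any existing entry (match on the tab-separated hostname), append the new one.
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  printf '172.17.240.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"

grep -c 'host.minikube.internal' "$HOSTS"   # exactly one entry remains
```

	Writing to a temp file and then copying back (rather than redirecting in place) is what lets the real code run the final copy under `sudo` without a redirect-permission problem.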
	I0429 20:24:40.569927    6560 kubeadm.go:877] updating cluster {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:24:40.570125    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:24:40.581034    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:40.605162    6560 docker.go:685] Got preloaded images: 
	I0429 20:24:40.605162    6560 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 20:24:40.617894    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:40.637456    6560 command_runner.go:139] > {"Repositories":{}}
	I0429 20:24:40.652557    6560 ssh_runner.go:195] Run: which lz4
	I0429 20:24:40.659728    6560 command_runner.go:130] > /usr/bin/lz4
	I0429 20:24:40.659728    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 20:24:40.676390    6560 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:24:40.682600    6560 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
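	The preload transfer above is gated on a `stat` call whose non-zero exit status means "file absent, copy it over". A self-contained sketch of that existence-check idiom (the path is illustrative):

```shell
# Existence check via stat's exit status; stderr is discarded since only
# the exit code matters to the caller.
TARGET=/tmp/preload-demo.tar.lz4
rm -f "$TARGET"
if stat -c "%s %y" "$TARGET" >/dev/null 2>&1; then
  echo "exists, skipping copy"
else
  echo "missing, copying"   # this branch runs: the file was just removed
fi
```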
	I0429 20:24:40.683537    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 20:24:43.151463    6560 docker.go:649] duration metric: took 2.4917153s to copy over tarball
	I0429 20:24:43.166991    6560 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:24:51.777678    6560 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6106197s)
	I0429 20:24:51.777678    6560 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:24:51.848689    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:51.869772    6560 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 20:24:51.869772    6560 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 20:24:51.923721    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:52.150884    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:55.504316    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3534062s)
	I0429 20:24:55.515091    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:24:55.540357    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 20:24:55.540357    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:24:55.540557    6560 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 20:24:55.540557    6560 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:24:55.540557    6560 kubeadm.go:928] updating node { 172.17.241.25 8443 v1.30.0 docker true true} ...
	I0429 20:24:55.540557    6560 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-515700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.241.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:24:55.550945    6560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 20:24:55.586940    6560 command_runner.go:130] > cgroupfs
	I0429 20:24:55.587354    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:24:55.587354    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:24:55.587354    6560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:24:55.587354    6560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.241.25 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-515700 NodeName:multinode-515700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.241.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.241.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:24:55.587882    6560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.241.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-515700"
	  kubeletExtraArgs:
	    node-ip: 172.17.241.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.241.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:24:55.601173    6560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubeadm
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubectl
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubelet
	I0429 20:24:55.622022    6560 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:24:55.633924    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:24:55.654273    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 20:24:55.692289    6560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:24:55.726319    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:24:55.774801    6560 ssh_runner.go:195] Run: grep 172.17.241.25	control-plane.minikube.internal$ /etc/hosts
	I0429 20:24:55.781653    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.241.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:24:55.820570    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:56.051044    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:24:56.087660    6560 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700 for IP: 172.17.241.25
	I0429 20:24:56.087753    6560 certs.go:194] generating shared ca certs ...
	I0429 20:24:56.087824    6560 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 20:24:56.089063    6560 certs.go:256] generating profile certs ...
	I0429 20:24:56.089855    6560 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key
	I0429 20:24:56.089855    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt with IP's: []
	I0429 20:24:56.283640    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt ...
	I0429 20:24:56.284633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt: {Name:mk1286f657dae134d1e4806ec4fc1d780c02f0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.285633    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key ...
	I0429 20:24:56.285633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key: {Name:mka98d4501f3f942abed1092b1c97c4a2bbd30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.286633    6560 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d
	I0429 20:24:56.287300    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.241.25]
	I0429 20:24:56.456862    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d ...
	I0429 20:24:56.456862    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d: {Name:mk09d828589f59d94791e90fc999c9ce1101118e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d ...
	I0429 20:24:56.458476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d: {Name:mk92ebf0409a99e4a3e3430ff86080f164f4bc0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458796    6560 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt
	I0429 20:24:56.473961    6560 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key
	I0429 20:24:56.474965    6560 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key
	I0429 20:24:56.474965    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt with IP's: []
	I0429 20:24:56.680472    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt ...
	I0429 20:24:56.680472    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt: {Name:mkc600562c7738e3eec9de4025428e3048df463a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.682476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key ...
	I0429 20:24:56.682476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key: {Name:mkc9ba6e1afbc9ca05ce8802b568a72bfd19a90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 20:24:56.685482    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 20:24:56.693323    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 20:24:56.701358    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 20:24:56.702409    6560 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 20:24:56.702718    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 20:24:56.702843    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 20:24:56.705315    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:24:56.758912    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 20:24:56.809584    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:24:56.860874    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:24:56.918708    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:24:56.969377    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:24:57.018903    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:24:57.070438    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:24:57.119823    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:24:57.168671    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 20:24:57.216697    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 20:24:57.263605    6560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:24:57.314590    6560 ssh_runner.go:195] Run: openssl version
	I0429 20:24:57.325614    6560 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 20:24:57.340061    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:24:57.374639    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.394971    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.404667    6560 command_runner.go:130] > b5213941
	I0429 20:24:57.419162    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:24:57.454540    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 20:24:57.494441    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.501867    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.502224    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.517134    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.527174    6560 command_runner.go:130] > 51391683
	I0429 20:24:57.544472    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 20:24:57.579789    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 20:24:57.613535    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622605    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622696    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.637764    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.649176    6560 command_runner.go:130] > 3ec20f2e
	I0429 20:24:57.665410    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
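	The three `openssl x509 -hash` / `ln -fs` rounds above install each CA into OpenSSL's hashed-lookup directory convention: a trust dir is scanned for files named `<subject-hash>.0`. A sketch of the same scheme with a throwaway self-signed cert (all paths and the CN are illustrative):

```shell
# Build a demo trust directory using the <hash>.0 naming that OpenSSL's
# -CApath lookup expects.
DIR=$(mktemp -d)

# Generate a throwaway self-signed CA cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$DIR/ca.key" \
  -out "$DIR/ca.pem" -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Compute the subject-name hash and create the hashed symlink.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"

# The cert now resolves through the hashed directory.
openssl verify -CApath "$DIR" "$DIR/ca.pem"
```

	The `test -L … || ln -fs …` guard in the log keeps this idempotent across restarts: the symlink is only (re)created when it is not already in place.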
	I0429 20:24:57.708796    6560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:24:57.716466    6560 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717133    6560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717510    6560 kubeadm.go:391] StartCluster: {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:24:57.729105    6560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 20:24:57.771112    6560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 20:24:57.792952    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 20:24:57.807601    6560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:24:57.837965    6560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 20:24:57.856820    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:156] found existing configuration files:
	
	I0429 20:24:57.872870    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:24:57.892109    6560 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.892549    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.905782    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:24:57.939062    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:24:57.957607    6560 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.957753    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.972479    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:24:58.006849    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.025918    6560 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.025918    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.039054    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.072026    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:24:58.092314    6560 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.092673    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.105776    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:24:58.124274    6560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:24:58.562957    6560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:24:58.562957    6560 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:25:12.186137    6560 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186137    6560 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186277    6560 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 20:25:12.186320    6560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:25:12.186516    6560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.187085    6560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.187131    6560 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.190071    6560 out.go:204]   - Generating certificates and keys ...
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190667    6560 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.191251    6560 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191251    6560 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 20:25:12.193040    6560 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193086    6560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193701    6560 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193701    6560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.198949    6560 out.go:204]   - Booting up control plane ...
	I0429 20:25:12.193843    6560 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199855    6560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:25:12.200494    6560 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200494    6560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201207    6560 command_runner.go:130] > [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.202201    6560 command_runner.go:130] > [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 out.go:204]   - Configuring RBAC rules ...
	I0429 20:25:12.202201    6560 command_runner.go:130] > [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.204361    6560 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.206433    6560 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206433    6560 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206983    6560 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 kubeadm.go:309] 
	I0429 20:25:12.207142    6560 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 kubeadm.go:309] 
	I0429 20:25:12.207365    6560 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207404    6560 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207464    6560 kubeadm.go:309] 
	I0429 20:25:12.207514    6560 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 20:25:12.207589    6560 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:25:12.207764    6560 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.207807    6560 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.208030    6560 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 kubeadm.go:309] 
	I0429 20:25:12.208230    6560 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208230    6560 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208281    6560 kubeadm.go:309] 
	I0429 20:25:12.208375    6560 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208375    6560 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208442    6560 kubeadm.go:309] 
	I0429 20:25:12.208643    6560 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 20:25:12.208733    6560 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:25:12.208874    6560 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.208936    6560 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.209014    6560 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209014    6560 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209735    6560 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209735    6560 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--control-plane 
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--control-plane 
	I0429 20:25:12.210277    6560 kubeadm.go:309] 
	I0429 20:25:12.210538    6560 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] 
	I0429 20:25:12.210726    6560 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210726    6560 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210937    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:25:12.211197    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:25:12.215717    6560 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 20:25:12.234164    6560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 20:25:12.242817    6560 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: 2024-04-29 20:23:14.801002600 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Change: 2024-04-29 20:23:06.257000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] >  Birth: -
	I0429 20:25:12.242817    6560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 20:25:12.242817    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 20:25:12.301387    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 20:25:13.060621    6560 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > serviceaccount/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > daemonset.apps/kindnet created
	I0429 20:25:13.060707    6560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-515700 minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=multinode-515700 minikube.k8s.io/primary=true
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.092072    6560 command_runner.go:130] > -16
	I0429 20:25:13.092113    6560 ops.go:34] apiserver oom_adj: -16
	I0429 20:25:13.290753    6560 command_runner.go:130] > node/multinode-515700 labeled
	I0429 20:25:13.292700    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 20:25:13.306335    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.426974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:13.819653    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.947766    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.320587    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.442246    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.822864    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.943107    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.309117    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.432718    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.814070    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.933861    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.317878    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.440680    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.819594    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.942387    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.322995    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.435199    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.809136    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.932465    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.308164    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.429047    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.808817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.928476    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.310090    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.432479    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.815590    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.929079    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.321723    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.442512    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.819466    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.933742    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.309314    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.424974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.811819    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.952603    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.316794    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.432125    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.808890    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.925838    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.310021    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.434432    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.819369    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.948876    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.307817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.457947    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.818980    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.932003    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:25.308659    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:25.488149    6560 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 20:25:25.488217    6560 command_runner.go:130] > default   0         1s
	I0429 20:25:25.489686    6560 kubeadm.go:1107] duration metric: took 12.4288824s to wait for elevateKubeSystemPrivileges
	W0429 20:25:25.489686    6560 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:25:25.489686    6560 kubeadm.go:393] duration metric: took 27.7719601s to StartCluster
	I0429 20:25:25.490694    6560 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.490694    6560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:25.491677    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.493697    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 20:25:25.493697    6560 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:25:25.498680    6560 out.go:177] * Verifying Kubernetes components...
	I0429 20:25:25.493697    6560 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:25:25.494664    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:25.504657    6560 addons.go:69] Setting storage-provisioner=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:69] Setting default-storageclass=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:234] Setting addon storage-provisioner=true in "multinode-515700"
	I0429 20:25:25.504657    6560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-515700"
	I0429 20:25:25.504657    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.520673    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:25:25.944109    6560 command_runner.go:130] > apiVersion: v1
	I0429 20:25:25.944267    6560 command_runner.go:130] > data:
	I0429 20:25:25.944267    6560 command_runner.go:130] >   Corefile: |
	I0429 20:25:25.944367    6560 command_runner.go:130] >     .:53 {
	I0429 20:25:25.944367    6560 command_runner.go:130] >         errors
	I0429 20:25:25.944367    6560 command_runner.go:130] >         health {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            lameduck 5s
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         ready
	I0429 20:25:25.944367    6560 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            pods insecure
	I0429 20:25:25.944367    6560 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 20:25:25.944367    6560 command_runner.go:130] >            ttl 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         prometheus :9153
	I0429 20:25:25.944367    6560 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            max_concurrent 1000
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         cache 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loop
	I0429 20:25:25.944367    6560 command_runner.go:130] >         reload
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loadbalance
	I0429 20:25:25.944367    6560 command_runner.go:130] >     }
	I0429 20:25:25.944367    6560 command_runner.go:130] > kind: ConfigMap
	I0429 20:25:25.944367    6560 command_runner.go:130] > metadata:
	I0429 20:25:25.944367    6560 command_runner.go:130] >   creationTimestamp: "2024-04-29T20:25:11Z"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   name: coredns
	I0429 20:25:25.944367    6560 command_runner.go:130] >   namespace: kube-system
	I0429 20:25:25.944367    6560 command_runner.go:130] >   resourceVersion: "265"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   uid: af2c186a-a14a-4671-8545-05c5ef5d4a89
	I0429 20:25:25.949389    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 20:25:26.023682    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:25:26.408680    6560 command_runner.go:130] > configmap/coredns replaced
	I0429 20:25:26.414254    6560 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.417677    6560 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 20:25:26.417677    6560 node_ready.go:35] waiting up to 6m0s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.435291    6560 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.437034    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.438430    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Audit-Id: a2ae57e5-53a3-4342-ad5c-c2149e87ef04
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438430    6560 round_trippers.go:580]     Audit-Id: 2e6b22a8-9874-417c-a6a5-f7b7437121f7
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438909    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.439086    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:26.440203    6560 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.440298    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.440406    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.440406    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.440519    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:26.440519    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.459913    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.459962    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Audit-Id: 9ca07d91-957f-4992-9642-97b01e07dde3
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.459962    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"393","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.918339    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.918339    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918339    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918339    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.918300    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.918498    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918580    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918580    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Audit-Id: 70383541-35df-461a-b4fb-41bd3b56f11d
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Audit-Id: e628428d-1384-4709-a32e-084c9dfec614
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.929164    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"404","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.929400    6560 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-515700" context rescaled to 1 replicas
	I0429 20:25:26.929400    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.426913    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.426913    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.426913    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.426913    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.430795    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:27.430795    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Audit-Id: e4e6b2b1-e008-4f2a-bae4-3596fce97666
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.430996    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.431340    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.789217    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.789348    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.792426    6560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:25:27.791141    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:27.795103    6560 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:27.795205    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:25:27.795205    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.795205    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:27.795924    6560 addons.go:234] Setting addon default-storageclass=true in "multinode-515700"
	I0429 20:25:27.795924    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:27.796802    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.922993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.923088    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.923175    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.923175    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.929435    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:27.929435    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.929545    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.929545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.929638    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Audit-Id: 8ef77f9f-d18f-4fa7-ab77-85c137602c84
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.930046    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.432611    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.432611    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.432611    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.432611    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.441320    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:28.441862    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Audit-Id: d32cd9f8-494c-4a69-b028-606c7d354657
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.442308    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.442914    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
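The `node_ready` lines above show minikube polling `GET /api/v1/nodes/multinode-515700` roughly every 500 ms and checking the node's `Ready` condition against the returned Node object. A minimal sketch of that readiness check, assuming the standard Kubernetes Node status shape (the struct below is an illustrative subset of the API object, not minikube's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus is a minimal subset of the Node object returned by
// GET /api/v1/nodes/<name>; field names match the Kubernetes API.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the Node body carries a Ready=True condition,
// mirroring the check behind the node_ready.go log lines above.
func isReady(body []byte) (bool, error) {
	var n nodeStatus
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready, _ := isReady(body)
	fmt.Println(ready) // prints false, matching the "Ready":"False" log line
}
```

While the node reports `Ready=False`, the loop sleeps and re-issues the GET, which is why the same truncated response body repeats throughout this section.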
	I0429 20:25:28.927674    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.927674    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.927674    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.927897    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.933213    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:28.933794    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Audit-Id: 75d40b2c-c2ed-4221-9361-88591791a649
	I0429 20:25:28.934208    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.422724    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.422898    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.422898    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.422975    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.426431    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:29.426876    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Audit-Id: dde47b6c-069b-408d-a5c6-0a2ea7439643
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.427261    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.918308    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.918308    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.918308    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.918407    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.921072    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:29.921072    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.921072    6560 round_trippers.go:580]     Audit-Id: d4643df6-68ad-4c4c-9604-a5a4d019fba1
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.922076    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.057466    6560 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:30.057636    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:25:30.057750    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:30.145026    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:30.424041    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.424310    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.424310    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.424310    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.428606    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.429051    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.429263    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Audit-Id: 2c59a467-8079-41ed-ac1d-f96dd660d343
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.429435    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.931993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.931993    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.931993    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.931993    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.936635    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.936635    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.937644    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Audit-Id: 9214de5b-8221-4c68-b6b9-a92fe7d41fd1
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.938175    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.939066    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:31.423866    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.423866    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.423866    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.423988    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.427054    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.427827    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.427827    6560 round_trippers.go:580]     Audit-Id: 5f66acb8-ef38-4220-83b6-6e87fbec6f58
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.427869    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:31.932664    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.932664    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.932761    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.932761    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.936680    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.936680    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Audit-Id: f9fb721e-ccaf-4e33-ac69-8ed840761191
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.937009    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.312723    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
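Between API polls, libmachine shells out to PowerShell to resolve the VM's IP address, as the `[executing ==>]` line above shows. A hedged sketch of how that query expression could be assembled (`buildIPQuery` is our illustrative helper name; minikube's Hyper-V driver constructs this string internally and runs it via `powershell.exe -NoProfile -NonInteractive`):

```go
package main

import "fmt"

// buildIPQuery reproduces the PowerShell expression seen in the log above,
// which reads the first IP address of the VM's first network adapter.
func buildIPQuery(vm string) string {
	return fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
}

func main() {
	fmt.Println(buildIPQuery("multinode-515700"))
	// prints: (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
}
```

The stdout of that command (`172.17.241.25` a few lines below) is then used to open the SSH connection to the guest.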
	I0429 20:25:32.424680    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.424953    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.424953    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.424953    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.428624    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:32.428906    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.428906    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Audit-Id: d3a39f3a-571d-46c0-a442-edf136da8a11
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.429531    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.858444    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:32.926226    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.926317    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.926393    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.926393    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.929204    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:32.929583    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.929583    6560 round_trippers.go:580]     Audit-Id: 55fc987d-65c0-4ac8-95d2-7fa4185e179b
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.929734    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.930327    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.034553    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:33.425759    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.425833    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.425833    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.425833    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.428624    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.429656    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Audit-Id: d581fce7-8906-48d7-8e13-2d1aba9dec04
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.429916    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.430438    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:33.930984    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.931053    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.931053    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.931053    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.933717    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.933717    6560 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.933717    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.933717    6560 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Audit-Id: 680ed792-db71-4b29-abb9-40f7154e8b1e
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.933717    6560 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 20:25:33.933717    6560 command_runner.go:130] > pod/storage-provisioner created
	I0429 20:25:33.933717    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.428102    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.428102    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.428102    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.428102    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.431722    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:34.432624    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Audit-Id: 86cc0608-3000-42b0-9ce8-4223e32d60c3
	I0429 20:25:34.432684    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.433082    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.932029    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.932316    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.932316    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.932316    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.936749    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:34.936749    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Audit-Id: 0e63a4db-3dd4-4e74-8b79-c019b6b97e89
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.937415    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:35.024893    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:35.025151    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:35.025317    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:35.170634    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:35.371184    6560 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0429 20:25:35.371418    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 20:25:35.371571    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.371571    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.371571    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.380781    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:35.381213    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Audit-Id: 31f5e265-3d38-4520-88d0-33f47325189c
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Length: 1273
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.381380    6560 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 20:25:35.382106    6560 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.382183    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 20:25:35.382183    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:35.382269    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.390758    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.390758    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.390758    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Audit-Id: 4dbb716e-2d97-4c38-b342-f63e7d38a4d0
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Length: 1220
	I0429 20:25:35.391190    6560 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.395279    6560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 20:25:35.397530    6560 addons.go:505] duration metric: took 9.9037568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 20:25:35.421733    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.421733    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.421733    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.421733    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.452743    6560 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 20:25:35.452743    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.452743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Audit-Id: 316d0393-7ba5-4629-87cb-7ae54d0ea965
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.454477    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.455068    6560 node_ready.go:49] node "multinode-515700" has status "Ready":"True"
	I0429 20:25:35.455148    6560 node_ready.go:38] duration metric: took 9.0374019s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:35.455148    6560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:35.455213    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:35.455213    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.455213    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.455213    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.473128    6560 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 20:25:35.473128    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Audit-Id: 81e159c0-b703-47ba-a9f3-82cc907b8705
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.475820    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"431","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0429 20:25:35.481714    6560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:35.482325    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.482379    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.482379    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.482432    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.491093    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.491093    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Audit-Id: a2eb7ca2-d415-4a7c-a1f0-1ac743bd8f82
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.492090    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.493335    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.493335    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.493335    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.493419    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.496084    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:35.496084    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.496084    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Audit-Id: f61c97ad-ee7a-4666-9244-d7d2091b5d09
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.497097    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.497131    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.497332    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.991323    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.991323    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.991323    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.991323    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.995451    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:35.995451    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Audit-Id: faa8a1a4-279f-4dc3-99c8-8c3b9e9ed746
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:35.996592    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.997239    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.997292    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.997292    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.997292    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.999987    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:35.999987    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Audit-Id: 070c7fff-f707-4b9a-9aef-031cedc68a8c
	I0429 20:25:36.000411    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.483004    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.483004    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.483004    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.483004    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.488152    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:36.488152    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.488152    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.488678    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Audit-Id: fb5cc675-b39d-4cb0-ba8c-24140b3d95e8
	I0429 20:25:36.489818    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.490926    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.490926    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.490985    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.490985    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.494654    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:36.494654    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Audit-Id: fe6d880a-4cf8-4b10-8c7f-debde123eafc
	I0429 20:25:36.495423    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.991643    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.991643    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.991643    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.991855    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.996384    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:36.996384    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Audit-Id: 933a6dd5-a0f7-4380-8189-3e378a8a620d
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:36.997332    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.997760    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.997760    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.997760    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.997760    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.000889    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.000889    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.001211    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Audit-Id: 0342e743-45a6-4fd7-97be-55a766946396
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.001274    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.001759    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.495712    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:37.495712    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.495712    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.495712    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.508671    6560 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 20:25:37.508671    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Audit-Id: d30c6154-a41b-4a0d-976f-d19f40e67223
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.508671    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0429 20:25:37.510663    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.510663    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.510663    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.510663    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.513686    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.513686    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Audit-Id: 397b83a5-95f9-4df8-a76b-042ecc96922c
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.514662    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.514662    6560 pod_ready.go:92] pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.514662    6560 pod_ready.go:81] duration metric: took 2.0329329s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-515700
	I0429 20:25:37.514662    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.514662    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.514662    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.517691    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.517691    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Audit-Id: df53f071-06ed-4797-a51b-7d01b84cac86
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.518412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-515700","namespace":"kube-system","uid":"85f2dc9a-17b5-413c-ab83-d3cbe955571e","resourceVersion":"319","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.241.25:2379","kubernetes.io/config.hash":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.mirror":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.seen":"2024-04-29T20:25:11.718525866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0429 20:25:37.519044    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.519044    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.519124    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.519124    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.521788    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.521788    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Audit-Id: ee5fdb3e-9869-4cd7-996a-a25b453aeb6b
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.521944    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.522769    6560 pod_ready.go:92] pod "etcd-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.522844    6560 pod_ready.go:81] duration metric: took 8.1819ms for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.522844    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.523015    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-515700
	I0429 20:25:37.523015    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.523079    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.523079    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.525575    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.525575    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Audit-Id: cd9d851c-f606-48c9-8da3-3d194ab5464f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.526015    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-515700","namespace":"kube-system","uid":"f5a212eb-87a9-476a-981a-9f31731f39e6","resourceVersion":"312","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.241.25:8443","kubernetes.io/config.hash":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.mirror":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.seen":"2024-04-29T20:25:11.718530866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0429 20:25:37.526356    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.526356    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.526356    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.526356    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.535954    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:37.535954    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Audit-Id: 018aa21f-d408-4777-b54c-eb7aa2295a34
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.536470    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.536974    6560 pod_ready.go:92] pod "kube-apiserver-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.537034    6560 pod_ready.go:81] duration metric: took 14.0881ms for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537034    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537183    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-515700
	I0429 20:25:37.537276    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.537297    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.537297    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.539964    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.539964    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Audit-Id: d3232756-fc07-4b33-a3b5-989d2932cec4
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.541274    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-515700","namespace":"kube-system","uid":"2c9ba563-c2af-45b7-bc1e-bf39759a339b","resourceVersion":"315","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.mirror":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.seen":"2024-04-29T20:25:11.718532066Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0429 20:25:37.541935    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.541935    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.541935    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.541935    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.555960    6560 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 20:25:37.555960    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Audit-Id: 2d117219-3b1a-47fe-99a4-7e5aea7e84d3
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.555960    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.555960    6560 pod_ready.go:92] pod "kube-controller-manager-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.555960    6560 pod_ready.go:81] duration metric: took 18.9251ms for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.555960    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.556943    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gx5x
	I0429 20:25:37.556943    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.556943    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.556943    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.559965    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.560477    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.560477    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Audit-Id: 14e6d1be-eac6-4f20-9621-b409c951fae1
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.560781    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gx5x","generateName":"kube-proxy-","namespace":"kube-system","uid":"886ac698-7e9b-431b-b822-577331b02f41","resourceVersion":"407","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"027f1d05-009f-4199-81e9-45b0a2d3710f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"027f1d05-009f-4199-81e9-45b0a2d3710f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0429 20:25:37.561552    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.561581    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.561581    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.561581    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.567713    6560 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 20:25:37.567713    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Audit-Id: 678df177-6944-4d30-b889-62528c06bab2
	I0429 20:25:37.567713    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.568391    6560 pod_ready.go:92] pod "kube-proxy-6gx5x" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.568391    6560 pod_ready.go:81] duration metric: took 12.4313ms for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.568391    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.701559    6560 request.go:629] Waited for 132.9214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701779    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701853    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.701853    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.701853    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.705314    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.706129    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.706129    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.706183    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Audit-Id: 4fb010ad-4d68-4aa0-9ba4-f68d04faa9e8
	I0429 20:25:37.706412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-515700","namespace":"kube-system","uid":"096d3e94-25ba-49b3-b329-6fb47fc88f25","resourceVersion":"334","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.mirror":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.seen":"2024-04-29T20:25:11.718533166Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0429 20:25:37.905204    6560 request.go:629] Waited for 197.8802ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.905322    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.905466    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.909057    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.909159    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Audit-Id: a6cecf7e-83ad-4d5f-8cbb-a65ced7e83ce
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.909286    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.909697    6560 pod_ready.go:92] pod "kube-scheduler-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.909697    6560 pod_ready.go:81] duration metric: took 341.3037ms for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
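The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from the client's own rate limiter — a token bucket governed by QPS and burst settings — rather than from server-side API Priority and Fairness. A rough, simplified sketch of that mechanism (the class name and parameter values here are illustrative, not minikube's actual implementation):

```python
import time

class TokenBucket:
    """Simplified client-side rate limiter: tokens refill at `qps`
    per second, capped at `burst`; acquire() blocks until a token
    is available and reports how long it waited."""

    def __init__(self, qps: float, burst: int):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def acquire(self) -> float:
        """Take one token; return the time spent waiting (0.0 if none)."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, never above burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return 0.0
        wait = (1.0 - self.tokens) / self.qps
        time.sleep(wait)
        self.tokens = 0.0
        self.last = time.monotonic()
        return wait
```

With a small burst, back-to-back requests drain the bucket and subsequent calls block — which is exactly the ~130–200ms stalls the log records between consecutive GETs.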
	I0429 20:25:37.909697    6560 pod_ready.go:38] duration metric: took 2.4545299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
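Each per-pod wait summarized above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) follows the same pattern: GET the pod, inspect its `Ready` condition, and repeat until it is true or a timeout elapses. A minimal sketch of that loop — `get_pod` is a hypothetical stand-in for the `GET /api/v1/namespaces/{ns}/pods/{name}` call seen in the log, returning a dict shaped like the Response Body entries above:

```python
import time

def wait_for_pod_ready(get_pod, name, namespace="kube-system",
                       timeout=360.0, interval=0.5):
    """Poll get_pod(namespace, name) until the pod reports a
    Ready=True condition; return True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pod = get_pod(namespace, name)
        conditions = pod.get("status", {}).get("conditions", [])
        if any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions):
            return True
        time.sleep(interval)
    return False
```

The 6m0s figure in the log corresponds to the per-pod timeout; the durations reported ("took 8.1819ms", "took 341.3037ms") are how long each individual pod took to satisfy the check.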
	I0429 20:25:37.909697    6560 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:25:37.923721    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:25:37.956142    6560 command_runner.go:130] > 2047
	I0429 20:25:37.956226    6560 api_server.go:72] duration metric: took 12.462433s to wait for apiserver process to appear ...
	I0429 20:25:37.956226    6560 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:25:37.956330    6560 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:25:37.965150    6560 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:25:37.965332    6560 round_trippers.go:463] GET https://172.17.241.25:8443/version
	I0429 20:25:37.965364    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.965364    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.965364    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.967124    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:37.967124    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Audit-Id: c3b17e5f-8eb5-4422-bcd1-48cea5831311
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.967423    6560 round_trippers.go:580]     Content-Length: 263
	I0429 20:25:37.967423    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 20:25:37.967530    6560 api_server.go:141] control plane version: v1.30.0
	I0429 20:25:37.967530    6560 api_server.go:131] duration metric: took 11.2306ms to wait for apiserver health ...
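The two requests above are minikube's readiness probe for the control plane: GET `/healthz` and expect the literal body `ok`, then GET `/version` and decode the reported `gitVersion`. A minimal, self-contained sketch of that sequence (using a stand-in `httptest` server, since the real target is `https://<node-ip>:8443` with TLS client certs):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// versionInfo mirrors the fields minikube reads from the apiserver's
// /version response shown in the log above.
type versionInfo struct {
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

// checkAPIServer performs the same two-step probe as api_server.go:
// GET /healthz expecting the body "ok", then GET /version.
func checkAPIServer(base string) (string, error) {
	resp, err := http.Get(base + "/healthz")
	if err != nil {
		return "", err
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	if resp.StatusCode != 200 || string(body) != "ok" {
		return "", fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	resp, err = http.Get(base + "/version")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}

func main() {
	// Stand-in server returning the same bodies seen in the log.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		switch r.URL.Path {
		case "/healthz":
			fmt.Fprint(w, "ok")
		case "/version":
			fmt.Fprint(w, `{"gitVersion":"v1.30.0","platform":"linux/amd64"}`)
		}
	}))
	defer srv.Close()

	ver, err := checkAPIServer(srv.URL)
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", ver)
}
```

This matches the "control plane version: v1.30.0" line that follows; error handling for non-200 healthz responses is what produces the retry loops seen elsewhere in these logs.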
	I0429 20:25:37.967629    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:25:38.109818    6560 request.go:629] Waited for 142.1878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110201    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110256    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.110275    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.110275    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.118070    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.118070    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Audit-Id: 557b3073-d14e-4919-8133-995d5b042d22
	I0429 20:25:38.119823    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.123197    6560 system_pods.go:59] 8 kube-system pods found
	I0429 20:25:38.123197    6560 system_pods.go:61] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.123197    6560 system_pods.go:74] duration metric: took 155.566ms to wait for pod list to return data ...
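The "Waited for … due to client-side throttling" lines come from the REST client's own rate limiter, not from API Priority and Fairness on the server: each request takes a token from a bucket that refills at a fixed QPS, and a request finding no token sleeps until one is due. A minimal token-bucket sketch of that behavior (the real limiter lives in client-go's flowcontrol package; the QPS/burst numbers here are illustrative, not minikube's actual configuration):

```go
package main

import (
	"fmt"
	"time"
)

// tokenBucket refills at qps tokens per second, up to burst.
type tokenBucket struct {
	qps    float64
	burst  float64
	tokens float64
	last   time.Time
}

func newTokenBucket(qps, burst float64) *tokenBucket {
	return &tokenBucket{qps: qps, burst: burst, tokens: burst, last: time.Now()}
}

// wait blocks until a token is available and reports how long it slept,
// which is the duration logged as "Waited for … due to client-side throttling".
func (b *tokenBucket) wait() time.Duration {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.qps
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return 0
	}
	// Sleep exactly long enough for the deficit to refill, then consume.
	d := time.Duration((1 - b.tokens) / b.qps * float64(time.Second))
	time.Sleep(d)
	b.tokens = 0
	b.last = b.last.Add(d)
	return d
}

func main() {
	// Illustrative numbers: 5 requests/s with a burst of 2.
	b := newTokenBucket(5, 2)
	for i := 1; i <= 4; i++ {
		fmt.Printf("request %d: waited %v due to client-side throttling\n", i, b.wait())
	}
}
```

With back-to-back GETs like the ones in this section, the burst is spent quickly and every subsequent request pays a wait on the order of `1/QPS`, which is why the logged waits sit in the 140–200ms range.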
	I0429 20:25:38.123197    6560 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:25:38.295950    6560 request.go:629] Waited for 172.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.296300    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.296300    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.300424    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:38.300424    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Content-Length: 261
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Audit-Id: 7466bf5b-fa07-4a6b-bc06-274738fc9259
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.300674    6560 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"13c4332f-9236-4f04-9e46-f5a98bc3d731","resourceVersion":"343","creationTimestamp":"2024-04-29T20:25:24Z"}}]}
	I0429 20:25:38.300674    6560 default_sa.go:45] found service account: "default"
	I0429 20:25:38.300674    6560 default_sa.go:55] duration metric: took 177.4758ms for default service account to be created ...
	I0429 20:25:38.300674    6560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:25:38.498686    6560 request.go:629] Waited for 197.291ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.498782    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.499005    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.499005    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.499005    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.506756    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.507387    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Audit-Id: ffc5efdb-4263-4450-8ff2-c1bb3f979300
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.507485    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.507503    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.508809    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.512231    6560 system_pods.go:86] 8 kube-system pods found
	I0429 20:25:38.512305    6560 system_pods.go:89] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.512305    6560 system_pods.go:89] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.512451    6560 system_pods.go:89] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.512451    6560 system_pods.go:126] duration metric: took 211.7756ms to wait for k8s-apps to be running ...
	I0429 20:25:38.512451    6560 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:25:38.526027    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:25:38.555837    6560 system_svc.go:56] duration metric: took 43.3852ms WaitForService to wait for kubelet
	I0429 20:25:38.555837    6560 kubeadm.go:576] duration metric: took 13.0620394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:25:38.556007    6560 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:25:38.701455    6560 request.go:629] Waited for 145.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701896    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701917    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.701917    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.702032    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.709221    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.709221    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Audit-Id: 9241b2a0-c483-4bfb-8a19-8f5c9b610b53
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.709221    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0429 20:25:38.710061    6560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:25:38.710061    6560 node_conditions.go:123] node cpu capacity is 2
	I0429 20:25:38.710061    6560 node_conditions.go:105] duration metric: took 154.0529ms to run NodePressure ...
	I0429 20:25:38.710061    6560 start.go:240] waiting for startup goroutines ...
	I0429 20:25:38.710061    6560 start.go:245] waiting for cluster config update ...
	I0429 20:25:38.710061    6560 start.go:254] writing updated cluster config ...
	I0429 20:25:38.717493    6560 out.go:177] 
	I0429 20:25:38.721129    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.735840    6560 out.go:177] * Starting "multinode-515700-m02" worker node in "multinode-515700" cluster
	I0429 20:25:38.738518    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:25:38.738518    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:25:38.738983    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:25:38.739240    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:25:38.739481    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.745029    6560 start.go:360] acquireMachinesLock for multinode-515700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:25:38.745029    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m02"
	I0429 20:25:38.745029    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 20:25:38.745575    6560 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 20:25:38.748852    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:25:38.748852    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:25:38.748852    6560 client.go:168] LocalClient.Create starting
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:40.746212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:25:42.605453    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:47.992432    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:47.992702    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:47.996014    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:25:48.551162    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: Creating VM...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:51.874174    6560 main.go:141] libmachine: Using switch "Default Switch"
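The `Get-VMSwitch` pipeline above filters for switches that are either External or have the well-known "Default Switch" GUID, and the driver then prefers an External switch, falling back to the Default Switch. That selection can be sketched against the exact JSON the cmdlet returned (in Hyper-V's enum, SwitchType 1 is Internal and 2 is External; `pickSwitch` is a hypothetical helper, not minikube's actual function name):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch matches the fields selected by the Get-VMSwitch pipeline.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// pickSwitch prefers an External switch (type 2), otherwise falls back
// to the Default Switch, identified by its fixed GUID on all hosts.
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	fallback := ""
	for _, s := range switches {
		if s.SwitchType == 2 { // External
			return s.Name, nil
		}
		if s.Id == "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444" { // Default Switch
			fallback = s.Name
		}
	}
	if fallback == "" {
		return "", fmt.Errorf("no usable switch found")
	}
	return fallback, nil
}

func main() {
	// The JSON Get-VMSwitch returned in the run above (whitespace trimmed).
	raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	name, err := pickSwitch(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println("Using switch:", name)
}
```

On this host no External switch exists, so the fallback branch fires and the log reports `Using switch "Default Switch"`.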
	I0429 20:25:51.874221    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:53.736899    6560 main.go:141] libmachine: Creating VHD
	I0429 20:25:53.737514    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D65FFD0C-285E-44D0-8723-21544BDDE71A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:25:57.529054    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:00.734035    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -SizeBytes 20000MB
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:03.314283    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-515700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700-m02 -DynamicMemoryEnabled $false
	I0429 20:26:09.480100    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700-m02 -Count 2
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:11.716979    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\boot2docker.iso'
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:14.377298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd'
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:17.090909    6560 main.go:141] libmachine: Starting VM...
	I0429 20:26:17.090909    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m02
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:22.527096    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:26.113296    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:28.339433    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:30.953587    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:30.953628    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:31.955478    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:34.197688    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:34.197831    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:34.197901    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:37.817016    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:42.682666    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:42.683603    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:43.685897    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:48.604877    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:48.604915    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:48.604999    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:50.794876    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:50.795093    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:50.795407    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:26:50.795407    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:52.992195    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:52.992243    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:52.992331    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:55.630552    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:55.641728    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:26:55.642758    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:26:55.769182    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:26:55.769182    6560 buildroot.go:166] provisioning hostname "multinode-515700-m02"
	I0429 20:26:55.769333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:57.942857    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:57.943721    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:57.943789    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:00.610012    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:00.610498    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:00.617342    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:00.618022    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:00.618022    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700-m02 && echo "multinode-515700-m02" | sudo tee /etc/hostname
	I0429 20:27:00.774430    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m02
	
	I0429 20:27:00.775391    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:02.970796    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:02.971352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:02.971577    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:05.640782    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:05.640782    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:05.640782    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:27:05.779330    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:27:05.779330    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:27:05.779435    6560 buildroot.go:174] setting up certificates
	I0429 20:27:05.779435    6560 provision.go:84] configureAuth start
	I0429 20:27:05.779531    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:07.939785    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:10.607752    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:10.608236    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:10.608319    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:15.428095    6560 provision.go:143] copyHostCerts
	I0429 20:27:15.429066    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:27:15.429066    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:27:15.429066    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:27:15.429626    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:27:15.430936    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:27:15.431366    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:27:15.431366    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:27:15.431875    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:27:15.432822    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:27:15.433064    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:27:15.433064    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:27:15.433498    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:27:15.434807    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700-m02 san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]
	I0429 20:27:15.511954    6560 provision.go:177] copyRemoteCerts
	I0429 20:27:15.527105    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:27:15.527105    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:20.368198    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:20.368587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:20.368930    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:20.467819    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9406764s)
	I0429 20:27:20.468832    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:27:20.469887    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:27:20.524889    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:27:20.525559    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 20:27:20.578020    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:27:20.578217    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:27:20.634803    6560 provision.go:87] duration metric: took 14.8552541s to configureAuth
	I0429 20:27:20.634874    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:27:20.635533    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:27:20.635638    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:22.779762    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:25.427345    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:25.427345    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:25.428345    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:27:25.562050    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:27:25.562195    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:27:25.562515    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:27:25.562592    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:30.404141    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:30.405195    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:30.412105    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:30.413171    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:30.413700    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.241.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:27:30.578477    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.241.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:27:30.578477    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:32.772580    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:35.465933    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:35.466426    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:35.466509    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:27:37.701893    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:27:37.701981    6560 machine.go:97] duration metric: took 46.9062133s to provisionDockerMachine
	I0429 20:27:37.702052    6560 client.go:171] duration metric: took 1m58.9522849s to LocalClient.Create
	I0429 20:27:37.702194    6560 start.go:167] duration metric: took 1m58.9524269s to libmachine.API.Create "multinode-515700"
	I0429 20:27:37.702194    6560 start.go:293] postStartSetup for "multinode-515700-m02" (driver="hyperv")
	I0429 20:27:37.702194    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:27:37.716028    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:27:37.716028    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:39.888498    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:39.889355    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:39.889707    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:42.576527    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:42.688245    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9721792s)
	I0429 20:27:42.703472    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:27:42.710185    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:27:42.710391    6560 command_runner.go:130] > ID=buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:27:42.710391    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:27:42.710562    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:27:42.710562    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:27:42.710640    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:27:42.712121    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:27:42.712121    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:27:42.725734    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:27:42.745571    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:27:42.798223    6560 start.go:296] duration metric: took 5.0959902s for postStartSetup
	I0429 20:27:42.801718    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:44.985225    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:47.630520    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:27:47.633045    6560 start.go:128] duration metric: took 2m8.8864784s to createHost
	I0429 20:27:47.633167    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:49.823309    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:49.823412    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:49.823495    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:52.524084    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:52.524183    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:52.530451    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:52.531204    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:52.531204    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:27:52.658970    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422472.660345683
	
	I0429 20:27:52.659208    6560 fix.go:216] guest clock: 1714422472.660345683
	I0429 20:27:52.659208    6560 fix.go:229] Guest: 2024-04-29 20:27:52.660345683 +0000 UTC Remote: 2024-04-29 20:27:47.6330452 +0000 UTC m=+346.394263801 (delta=5.027300483s)
	I0429 20:27:52.659208    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:57.461861    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:57.461927    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:57.467747    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:57.468699    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:57.468699    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422472
	I0429 20:27:57.617018    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:27:52 UTC 2024
	
	I0429 20:27:57.617018    6560 fix.go:236] clock set: Mon Apr 29 20:27:52 UTC 2024
	 (err=<nil>)
	I0429 20:27:57.617018    6560 start.go:83] releasing machines lock for "multinode-515700-m02", held for 2m18.8709228s
	I0429 20:27:57.618122    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:59.795247    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:02.475615    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:02.475867    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:02.479078    6560 out.go:177] * Found network options:
	I0429 20:28:02.481434    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.483990    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.486147    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.488513    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 20:28:02.490094    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.492090    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:28:02.492090    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:02.504078    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:28:02.504078    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:07.440744    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.440938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.441026    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.467629    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.629032    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:28:07.630105    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1379759s)
	I0429 20:28:07.630105    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 20:28:07.630229    6560 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1259881s)
	W0429 20:28:07.630229    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:28:07.649597    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:28:07.685721    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:28:07.685954    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:28:07.685954    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:07.686227    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:07.722613    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:28:07.736060    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:28:07.771561    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:28:07.793500    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:28:07.809715    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:28:07.846242    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.882404    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:28:07.918280    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.956186    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:28:07.994072    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:28:08.029701    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:28:08.067417    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:28:08.104772    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:28:08.126209    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:28:08.140685    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:28:08.181598    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:08.410362    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:28:08.449856    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:08.466974    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Unit]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:28:08.492900    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:28:08.492900    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:28:08.492900    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Service]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Type=notify
	I0429 20:28:08.492900    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:28:08.492900    6560 command_runner.go:130] > Environment=NO_PROXY=172.17.241.25
	I0429 20:28:08.492900    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:28:08.492900    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:28:08.492900    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:28:08.492900    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:28:08.492900    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:28:08.492900    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:28:08.492900    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:28:08.492900    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:28:08.492900    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:28:08.493891    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:28:08.493891    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:28:08.493891    6560 command_runner.go:130] > Delegate=yes
	I0429 20:28:08.493891    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:28:08.493891    6560 command_runner.go:130] > KillMode=process
	I0429 20:28:08.493891    6560 command_runner.go:130] > [Install]
	I0429 20:28:08.493891    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:28:08.505928    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.548562    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:28:08.606977    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.652185    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.695349    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:28:08.785230    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.816602    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:08.853434    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:28:08.870019    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:28:08.876256    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:28:08.890247    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:28:08.911471    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:28:08.962890    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:28:09.201152    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:28:09.397561    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:28:09.398166    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:28:09.451159    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:09.673084    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:29:10.809648    6560 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 20:29:10.809648    6560 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 20:29:10.809648    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1361028s)
	I0429 20:29:10.827248    6560 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 20:29:10.852173    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 20:29:10.852279    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 20:29:10.852319    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 20:29:10.852739    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854100    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854118    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854142    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854175    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 20:29:10.865335    6560 out.go:177] 
	W0429 20:29:10.865335    6560 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 20:29:10.865335    6560 out.go:239] * 
	W0429 20:29:10.869400    6560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:29:10.876700    6560 out.go:177] 
	
	
	==> Docker <==
	Apr 29 20:29:32 multinode-515700 dockerd[1325]: 2024/04/29 20:29:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:33 multinode-515700 dockerd[1325]: 2024/04/29 20:29:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.311678535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.311805235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.311843635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.314238729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:49 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:29:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1a58f6d29ec95da5888905a6941e048b2c50f12c8ae76975e21ae109c16a8bb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 20:29:50 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:29:50Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935705225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935856331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935874732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.936415956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	32c6f043cec2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Running             busybox                   0                   e1a58f6d29ec9       busybox-fc5497c4f-dv5v8
	15da1b832ef20       cbb01a7bd410d                                                                                         17 minutes ago      Running             coredns                   0                   73ab97e30d3d0       coredns-7db6d8ff4d-drcsj
	b26e455e6f823       6e38f40d628db                                                                                         17 minutes ago      Running             storage-provisioner       0                   0274116a036cf       storage-provisioner
	11141cf0a01e5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              17 minutes ago      Running             kindnet-cni               0                   5c226cf922db1       kindnet-lt84t
	8d116812e2fa7       a0bf559e280cf                                                                                         17 minutes ago      Running             kube-proxy                0                   c4e88976a7bb5       kube-proxy-6gx5x
	9b9ad8fbed853       c42f13656d0b2                                                                                         17 minutes ago      Running             kube-apiserver            0                   e1040c321d522       kube-apiserver-multinode-515700
	7748681b165fb       259c8277fcbbc                                                                                         17 minutes ago      Running             kube-scheduler            0                   ab47450efbe05       kube-scheduler-multinode-515700
	01f30fac305bc       3861cfcd7c04c                                                                                         17 minutes ago      Running             etcd                      0                   b5202cca492c4       etcd-multinode-515700
	c5de44f1f1066       c7aad43836fa5                                                                                         17 minutes ago      Running             kube-controller-manager   0                   4ae9818227910       kube-controller-manager-multinode-515700
	
	
	==> coredns [15da1b832ef2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 658b75f59357881579d818bea4574a099ffd8bf4e34cb2d6414c381890635887b0895574e607ab48d69c0bc2657640404a00a48de79c5b96ce27f6a68e70a912
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36587 - 14172 "HINFO IN 4725538422205950284.7962128480288568612. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062354244s
	[INFO] 10.244.0.3:46156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244102s
	[INFO] 10.244.0.3:48057 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.210765088s
	[INFO] 10.244.0.3:47676 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15403778s
	[INFO] 10.244.0.3:57534 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.237328274s
	[INFO] 10.244.0.3:38726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000345103s
	[INFO] 10.244.0.3:54844 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.04703092s
	[INFO] 10.244.0.3:51897 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000879808s
	[INFO] 10.244.0.3:57925 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122101s
	[INFO] 10.244.0.3:39997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012692914s
	[INFO] 10.244.0.3:37301 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000333403s
	[INFO] 10.244.0.3:60294 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172702s
	[INFO] 10.244.0.3:33135 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000250902s
	[INFO] 10.244.0.3:46585 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141701s
	[INFO] 10.244.0.3:41280 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127902s
	[INFO] 10.244.0.3:46602 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000220001s
	[INFO] 10.244.0.3:47802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077001s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	[INFO] 10.244.0.3:45741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166201s
	[INFO] 10.244.0.3:48683 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	[INFO] 10.244.0.3:47252 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159702s
	
	
	==> describe nodes <==
	Name:               multinode-515700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:42:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:40:31 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:40:31 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:40:31 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:40:31 +0000   Mon, 29 Apr 2024 20:25:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.241.25
	  Hostname:    multinode-515700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc8de88647d944658545c7ae4a702aea
	  System UUID:                68adc21b-67d2-5446-9537-0dea9fd880a0
	  Boot ID:                    9507eca5-5f1f-4862-974e-a61fb27048d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dv5v8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-drcsj                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-multinode-515700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-lt84t                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-multinode-515700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-multinode-515700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-6gx5x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-multinode-515700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node multinode-515700 event: Registered Node multinode-515700 in Controller
	  Normal  NodeReady                17m                kubelet          Node multinode-515700 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 20:24] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.212417] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +31.830340] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.112166] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.613568] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.259380] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.863180] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.213718] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.233297] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.301716] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.953055] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.129851] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.793087] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[Apr29 20:25] systemd-fstab-generator[1710]: Ignoring "noauto" option for root device
	[  +0.110579] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.112113] systemd-fstab-generator[2108]: Ignoring "noauto" option for root device
	[  +0.165104] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.220827] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.255309] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.248279] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 20:26] hrtimer: interrupt took 3466547 ns
	[Apr29 20:29] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [01f30fac305b] <==
	{"level":"info","ts":"2024-04-29T20:25:05.594687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.594905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f received MsgVoteResp from 46980dd3bf48ce1f at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.595201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46980dd3bf48ce1f became leader at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.595536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46980dd3bf48ce1f elected leader 46980dd3bf48ce1f at term 2"}
	{"level":"info","ts":"2024-04-29T20:25:05.604545Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.611204Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"46980dd3bf48ce1f","local-member-attributes":"{Name:multinode-515700 ClientURLs:[https://172.17.241.25:2379]}","request-path":"/0/members/46980dd3bf48ce1f/attributes","cluster-id":"abc09309ccc0cb76","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T20:25:05.611653Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:25:05.620024Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.241.25:2379"}
	{"level":"info","ts":"2024-04-29T20:25:05.630573Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:25:05.63137Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T20:25:05.649307Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"abc09309ccc0cb76","local-member-id":"46980dd3bf48ce1f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.651933Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.653346Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:25:05.649239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T20:25:05.64915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T20:25:33.808305Z","caller":"traceutil/trace.go:171","msg":"trace[1613443414] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"266.125878ms","start":"2024-04-29T20:25:33.542119Z","end":"2024-04-29T20:25:33.808245Z","steps":["trace[1613443414] 'process raft request'  (duration: 265.820275ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:25:55.320778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.998939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4514"}
	{"level":"info","ts":"2024-04-29T20:25:55.320958Z","caller":"traceutil/trace.go:171","msg":"trace[1653665751] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:466; }","duration":"111.193233ms","start":"2024-04-29T20:25:55.209749Z","end":"2024-04-29T20:25:55.320942Z","steps":["trace[1653665751] 'range keys from in-memory index tree'  (duration: 110.919042ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:26:47.825608Z","caller":"traceutil/trace.go:171","msg":"trace[1666429790] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"149.644884ms","start":"2024-04-29T20:26:47.675822Z","end":"2024-04-29T20:26:47.825467Z","steps":["trace[1666429790] 'process raft request'  (duration: 149.476087ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:35:06.24957Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":689}
	{"level":"info","ts":"2024-04-29T20:35:06.267107Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":689,"took":"17.292815ms","hash":1810199713,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2174976,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-29T20:35:06.267193Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1810199713,"revision":689,"compact-revision":-1}
	{"level":"info","ts":"2024-04-29T20:40:06.283473Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":929}
	{"level":"info","ts":"2024-04-29T20:40:06.293716Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":929,"took":"9.365404ms","hash":2966419944,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-29T20:40:06.293891Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2966419944,"revision":929,"compact-revision":689}
	
	
	==> kernel <==
	 20:42:55 up 19 min,  0 users,  load average: 0.34, 0.40, 0.32
	Linux multinode-515700 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11141cf0a01e] <==
	I0429 20:40:46.260372       1 main.go:227] handling current node
	I0429 20:40:56.275247       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:40:56.275401       1 main.go:227] handling current node
	I0429 20:41:06.281012       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:06.281170       1 main.go:227] handling current node
	I0429 20:41:16.296558       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:16.296671       1 main.go:227] handling current node
	I0429 20:41:26.309655       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:26.310492       1 main.go:227] handling current node
	I0429 20:41:36.316612       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:36.316700       1 main.go:227] handling current node
	I0429 20:41:46.333007       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:46.333112       1 main.go:227] handling current node
	I0429 20:41:56.342898       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:41:56.343020       1 main.go:227] handling current node
	I0429 20:42:06.358041       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:42:06.358634       1 main.go:227] handling current node
	I0429 20:42:16.365765       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:42:16.365883       1 main.go:227] handling current node
	I0429 20:42:26.377170       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:42:26.377430       1 main.go:227] handling current node
	I0429 20:42:36.386530       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:42:36.386648       1 main.go:227] handling current node
	I0429 20:42:46.395132       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:42:46.395310       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9b9ad8fbed85] <==
	I0429 20:25:08.278862       1 policy_source.go:224] refreshing policies
	E0429 20:25:08.294082       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 20:25:08.344166       1 controller.go:615] quota admission added evaluator for: namespaces
	E0429 20:25:08.380713       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 20:25:08.456691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 20:25:09.052862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 20:25:09.062497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 20:25:09.063038       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 20:25:10.434046       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 20:25:10.531926       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 20:25:10.667114       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 20:25:10.682682       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.241.25]
	I0429 20:25:10.685084       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 20:25:10.705095       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 20:25:11.202529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 20:25:11.660474       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 20:25:11.702512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 20:25:11.739640       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 20:25:25.195544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 20:25:25.294821       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 20:41:45.603992       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54600: use of closed network connection
	E0429 20:41:46.683622       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54606: use of closed network connection
	E0429 20:41:47.742503       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54616: use of closed network connection
	E0429 20:42:24.359204       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54636: use of closed network connection
	E0429 20:42:34.907983       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54638: use of closed network connection
	
	
	==> kube-controller-manager [c5de44f1f106] <==
	I0429 20:25:24.549051       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 20:25:24.561849       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 20:25:24.566483       1 shared_informer.go:320] Caches are synced for disruption
	I0429 20:25:24.590460       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 20:25:24.618362       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 20:25:24.656708       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 20:25:25.127753       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 20:25:25.137681       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 20:25:25.137746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 20:25:25.742477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="536.801912ms"
	I0429 20:25:25.820241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.613668ms"
	I0429 20:25:25.820606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.801µs"
	I0429 20:25:26.647122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.452819ms"
	I0429 20:25:26.673190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.454556ms"
	I0429 20:25:26.673366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.301µs"
	I0429 20:25:35.442523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48µs"
	I0429 20:25:35.504302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.901µs"
	I0429 20:25:37.519404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.21268ms"
	I0429 20:25:37.519516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.698µs"
	I0429 20:25:39.495810       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 20:29:47.937478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.419556ms"
	I0429 20:29:47.961915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.36964ms"
	I0429 20:29:47.962862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.499µs"
	I0429 20:29:52.098445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.730146ms"
	I0429 20:29:52.098921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.902µs"
	
	
	==> kube-proxy [8d116812e2fa] <==
	I0429 20:25:27.278575       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:25:27.322396       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.241.25"]
	I0429 20:25:27.381777       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:25:27.381896       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:25:27.381924       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:25:27.389649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:25:27.392153       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:25:27.392448       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:25:27.396161       1 config.go:192] "Starting service config controller"
	I0429 20:25:27.396372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:25:27.396564       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:25:27.396976       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:25:27.399035       1 config.go:319] "Starting node config controller"
	I0429 20:25:27.399236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:25:27.497521       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:25:27.497518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:25:27.500527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7748681b165f] <==
	W0429 20:25:09.310708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.311983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.372121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.372287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.389043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:25:09.389975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 20:25:09.402308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.402357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.414781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:25:09.414997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:25:09.463545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:25:09.463684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:25:09.473360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 20:25:09.473524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 20:25:09.538214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 20:25:09.538587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 20:25:09.595918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.596510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.751697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 20:25:09.752615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 20:25:09.794103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:25:09.794595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 20:25:09.800334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 20:25:09.800494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 20:25:11.092300       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:38:11 multinode-515700 kubelet[2116]: E0429 20:38:11.928823    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:38:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:38:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:38:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:38:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:39:11 multinode-515700 kubelet[2116]: E0429 20:39:11.928961    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:39:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:39:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:39:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:39:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:40:11 multinode-515700 kubelet[2116]: E0429 20:40:11.930322    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:40:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:40:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:40:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:40:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:41:11 multinode-515700 kubelet[2116]: E0429 20:41:11.923434    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:41:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:41:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:41:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:41:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:42:11 multinode-515700 kubelet[2116]: E0429 20:42:11.922919    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:42:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:42:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:42:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:42:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [b26e455e6f82] <==
	I0429 20:25:36.743650       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 20:25:36.787682       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 20:25:36.790227       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 20:25:36.820440       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 20:25:36.822463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-515700_84e09442-fcd9-4e18-9e2f-7318e6322b1c!
	I0429 20:25:36.823363       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0dcda3dc-692f-4183-b089-a530533f9298", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-515700_84e09442-fcd9-4e18-9e2f-7318e6322b1c became leader
	I0429 20:25:36.927070       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-515700_84e09442-fcd9-4e18-9e2f-7318e6322b1c!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:42:47.666818     780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700: (12.3792187s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-515700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E0429 20:43:10.221761   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
helpers_test.go:272: non-running pods: busybox-fc5497c4f-2t4c2
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-515700 describe pod busybox-fc5497c4f-2t4c2
helpers_test.go:282: (dbg) kubectl --context multinode-515700 describe pod busybox-fc5497c4f-2t4c2:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-2t4c2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blkc9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-blkc9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m58s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (47.59s)

                                                
                                    
TestMultiNode/serial/AddNode (271s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-515700 -v 3 --alsologtostderr
E0429 20:43:27.240255   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 20:45:24.001189   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-515700 -v 3 --alsologtostderr: (3m19.5785321s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status --alsologtostderr
multinode_test.go:127: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status --alsologtostderr: exit status 2 (36.4674223s)

                                                
                                                
-- stdout --
	multinode-515700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-515700-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-515700-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:46:30.190036    1364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 20:46:30.285988    1364 out.go:291] Setting OutFile to fd 1880 ...
	I0429 20:46:30.286931    1364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:46:30.286931    1364 out.go:304] Setting ErrFile to fd 1916...
	I0429 20:46:30.286931    1364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:46:30.303826    1364 out.go:298] Setting JSON to false
	I0429 20:46:30.303826    1364 mustload.go:65] Loading cluster: multinode-515700
	I0429 20:46:30.303826    1364 notify.go:220] Checking for updates...
	I0429 20:46:30.304761    1364 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:46:30.304761    1364 status.go:255] checking status of multinode-515700 ...
	I0429 20:46:30.305960    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:46:32.502641    1364 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:46:32.503208    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:32.503286    1364 status.go:330] multinode-515700 host status = "Running" (err=<nil>)
	I0429 20:46:32.503324    1364 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:46:32.504295    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:46:34.736337    1364 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:46:34.737065    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:34.737160    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:46:37.431212    1364 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:46:37.431320    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:37.431320    1364 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:46:37.446666    1364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 20:46:37.446666    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:46:39.607574    1364 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:46:39.607574    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:39.607805    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:46:42.244212    1364 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:46:42.245125    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:42.245403    1364 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:46:42.350503    1364 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9038013s)
	I0429 20:46:42.365081    1364 ssh_runner.go:195] Run: systemctl --version
	I0429 20:46:42.389158    1364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:46:42.416119    1364 kubeconfig.go:125] found "multinode-515700" server: "https://172.17.241.25:8443"
	I0429 20:46:42.416119    1364 api_server.go:166] Checking apiserver status ...
	I0429 20:46:42.428111    1364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:46:42.470372    1364 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup
	W0429 20:46:42.492019    1364 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:46:42.504804    1364 ssh_runner.go:195] Run: ls
	I0429 20:46:42.513391    1364 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:46:42.521439    1364 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:46:42.521439    1364 status.go:422] multinode-515700 apiserver status = Running (err=<nil>)
	I0429 20:46:42.521439    1364 status.go:257] multinode-515700 status: &{Name:multinode-515700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 20:46:42.521855    1364 status.go:255] checking status of multinode-515700-m02 ...
	I0429 20:46:42.522641    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:46:44.674167    1364 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:46:44.674895    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:44.674895    1364 status.go:330] multinode-515700-m02 host status = "Running" (err=<nil>)
	I0429 20:46:44.674895    1364 host.go:66] Checking if "multinode-515700-m02" exists ...
	I0429 20:46:44.675742    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:46:46.898794    1364 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:46:46.898794    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:46.898794    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:46:49.511131    1364 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:46:49.511131    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:49.511697    1364 host.go:66] Checking if "multinode-515700-m02" exists ...
	I0429 20:46:49.526620    1364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 20:46:49.526620    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:46:51.665882    1364 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:46:51.665882    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:51.666765    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:46:54.263603    1364 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:46:54.263774    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:54.264190    1364 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:46:54.360354    1364 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8336985s)
	I0429 20:46:54.373989    1364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:46:54.399713    1364 status.go:257] multinode-515700-m02 status: &{Name:multinode-515700-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 20:46:54.399810    1364 status.go:255] checking status of multinode-515700-m03 ...
	I0429 20:46:54.400739    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:46:56.595960    1364 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:46:56.596172    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:56.596261    1364 status.go:330] multinode-515700-m03 host status = "Running" (err=<nil>)
	I0429 20:46:56.596261    1364 host.go:66] Checking if "multinode-515700-m03" exists ...
	I0429 20:46:56.596461    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:46:58.794873    1364 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:46:58.795581    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:46:58.796005    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:47:01.486053    1364 main.go:141] libmachine: [stdout =====>] : 172.17.240.210
	
	I0429 20:47:01.486745    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:47:01.486825    1364 host.go:66] Checking if "multinode-515700-m03" exists ...
	I0429 20:47:01.502798    1364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 20:47:01.502798    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:47:03.705564    1364 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:47:03.705910    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:47:03.705910    1364 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:47:06.344098    1364 main.go:141] libmachine: [stdout =====>] : 172.17.240.210
	
	I0429 20:47:06.344098    1364 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:47:06.344920    1364 sshutil.go:53] new ssh client: &{IP:172.17.240.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m03\id_rsa Username:docker}
	I0429 20:47:06.446338    1364 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9435046s)
	I0429 20:47:06.460712    1364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:47:06.488672    1364 status.go:257] multinode-515700-m03 status: &{Name:multinode-515700-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:129: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-515700 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700: (12.3597488s)
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25: (8.6987303s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-515700 -- apply -f                   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:29 UTC | 29 Apr 24 20:29 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- rollout                    | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:29 UTC |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | busybox-fc5497c4f-dv5v8                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-dv5v8 -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.240.1                         |                  |                   |         |                     |                     |
	| node    | add -p multinode-515700 -v 3                      | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:43 UTC | 29 Apr 24 20:46 UTC |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:22:01
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:22:01.431751    6560 out.go:291] Setting OutFile to fd 1000 ...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.432590    6560 out.go:304] Setting ErrFile to fd 1156...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.463325    6560 out.go:298] Setting JSON to false
	I0429 20:22:01.467738    6560 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24060,"bootTime":1714398060,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 20:22:01.467738    6560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 20:22:01.473386    6560 out.go:177] * [multinode-515700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 20:22:01.477900    6560 notify.go:220] Checking for updates...
	I0429 20:22:01.480328    6560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:22:01.485602    6560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:22:01.488123    6560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 20:22:01.490657    6560 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:22:01.493249    6560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:22:01.496241    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:22:01.497610    6560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:22:06.930154    6560 out.go:177] * Using the hyperv driver based on user configuration
	I0429 20:22:06.933587    6560 start.go:297] selected driver: hyperv
	I0429 20:22:06.933587    6560 start.go:901] validating driver "hyperv" against <nil>
	I0429 20:22:06.933587    6560 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:22:06.986262    6560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 20:22:06.987723    6560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:22:06.988334    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:22:06.988334    6560 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 20:22:06.988334    6560 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 20:22:06.988334    6560 start.go:340] cluster config:
	{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:22:06.988334    6560 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:22:06.992867    6560 out.go:177] * Starting "multinode-515700" primary control-plane node in "multinode-515700" cluster
	I0429 20:22:06.995976    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:22:06.996499    6560 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 20:22:06.996703    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:22:06.996741    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:22:06.996741    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:22:06.996741    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:22:06.996741    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json: {Name:mkdf346f9e30a055d7c79ffb416c8ce539e0c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:360] acquireMachinesLock for multinode-515700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700"
	I0429 20:22:06.999081    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:22:06.999081    6560 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 20:22:07.006481    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:22:07.006790    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:22:07.006790    6560 client.go:168] LocalClient.Create starting
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:22:09.217702    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:22:09.217822    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:09.217951    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:22:11.056235    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:12.618512    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:16.461966    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:22:17.019827    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: Creating VM...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:20.140355    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:22:20.140483    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:22.004896    6560 main.go:141] libmachine: Creating VHD
	I0429 20:22:22.004896    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:22:25.795387    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DA11902-3EE7-4F99-A00A-752C0686FD99
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:22:25.796445    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:25.796496    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:22:25.796702    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:22:25.814462    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:22:29.034595    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:29.035273    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:29.035337    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -SizeBytes 20000MB
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:31.671427    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-515700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:35.461856    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700 -DynamicMemoryEnabled $false
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:37.723890    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700 -Count 2
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\boot2docker.iso'
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:42.558432    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd'
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:45.265400    6560 main.go:141] libmachine: Starting VM...
	I0429 20:22:45.265400    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:50.732199    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:50.733048    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:50.733149    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:54.308058    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:56.517062    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:59.110985    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:59.111613    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:00.127675    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:02.349860    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:05.987459    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:08.224322    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:10.790333    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:10.791338    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:11.803237    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:14.061252    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:18.855659    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:23:18.855911    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:21.063683    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:23.697335    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:23.697580    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:23.703285    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:23.713965    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:23.713965    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:23:23.854760    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:23:23.854760    6560 buildroot.go:166] provisioning hostname "multinode-515700"
	I0429 20:23:23.854760    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:26.029157    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:26.029995    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:26.030093    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:28.624899    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:28.625217    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:28.625495    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700 && echo "multinode-515700" | sudo tee /etc/hostname
	I0429 20:23:28.799265    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700
	
	I0429 20:23:28.799376    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:30.924333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:33.588985    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:33.589381    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:33.589381    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:23:33.743242    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:23:33.743242    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:23:33.743242    6560 buildroot.go:174] setting up certificates
	I0429 20:23:33.743242    6560 provision.go:84] configureAuth start
	I0429 20:23:33.743939    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:35.885562    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:38.477298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:40.581307    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:43.165623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:43.165853    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:43.165933    6560 provision.go:143] copyHostCerts
	I0429 20:23:43.166093    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:23:43.166093    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:23:43.166093    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:23:43.166722    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:23:43.168141    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:23:43.168305    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:23:43.168305    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:23:43.168887    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:23:43.169614    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:23:43.170245    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:23:43.170340    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:23:43.170731    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:23:43.171712    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700 san=[127.0.0.1 172.17.241.25 localhost minikube multinode-515700]
	I0429 20:23:43.368646    6560 provision.go:177] copyRemoteCerts
	I0429 20:23:43.382882    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:23:43.382882    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:45.539057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:48.109324    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:23:48.217340    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8343588s)
	I0429 20:23:48.217478    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:23:48.218375    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:23:48.267636    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:23:48.267636    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 20:23:48.316493    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:23:48.316784    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:23:48.372851    6560 provision.go:87] duration metric: took 14.6294509s to configureAuth
	I0429 20:23:48.372952    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:23:48.373086    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:23:48.373086    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:50.522765    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:50.522998    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:50.523146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:53.169650    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:53.170462    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:53.170462    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:23:53.302673    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:23:53.302726    6560 buildroot.go:70] root file system type: tmpfs
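The "root file system type: tmpfs" line comes from the one-liner SSH probe shown just above. The same probe, runnable locally (GNU coreutils `df`):

```shell
# Probe the root filesystem type, as the provisioner does over SSH.
fstype=$(df --output=fstype / | tail -n 1)
echo "root file system type: ${fstype}"
```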
	I0429 20:23:53.302726    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:23:53.302726    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:55.434984    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:58.060160    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:58.061082    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:58.067077    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:58.068199    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:58.068292    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:23:58.226608    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:23:58.227212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:00.358933    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:02.944293    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:02.944373    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:02.950227    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:02.950958    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:02.950958    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:24:05.224184    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:24:05.224184    6560 machine.go:97] duration metric: took 46.3681587s to provisionDockerMachine
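The unit install above uses an idempotent "diff or replace" idiom: the freshly rendered `docker.service.new` only overwrites the live unit (and triggers daemon-reload/enable/restart) when the contents actually differ. A minimal local sketch with hypothetical temp paths instead of the real systemd unit:

```shell
# Idempotent "diff || replace" update (hypothetical /tmp paths, not the real unit file).
printf 'old\n' > /tmp/docker.service
printf 'new\n' > /tmp/docker.service.new
diff -u /tmp/docker.service /tmp/docker.service.new >/dev/null \
  || mv /tmp/docker.service.new /tmp/docker.service   # contents differ: replace in place
cat /tmp/docker.service                               # now holds the new contents
```

When the files already match, `diff` exits 0 and the replace-and-restart branch is skipped entirely, which is why re-provisioning an unchanged machine is cheap.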
	I0429 20:24:05.224184    6560 client.go:171] duration metric: took 1m58.2164577s to LocalClient.Create
	I0429 20:24:05.224184    6560 start.go:167] duration metric: took 1m58.2164577s to libmachine.API.Create "multinode-515700"
	I0429 20:24:05.224184    6560 start.go:293] postStartSetup for "multinode-515700" (driver="hyperv")
	I0429 20:24:05.224184    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:24:05.241199    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:24:05.241199    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:07.393879    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:09.983789    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:09.984033    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:09.984469    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:10.092254    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8510176s)
	I0429 20:24:10.107982    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:24:10.116700    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:24:10.116700    6560 command_runner.go:130] > ID=buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:24:10.116700    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:24:10.116700    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:24:10.116700    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:24:10.117268    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:24:10.118515    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:24:10.118515    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:24:10.132514    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:24:10.152888    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:24:10.201665    6560 start.go:296] duration metric: took 4.9774423s for postStartSetup
	I0429 20:24:10.204966    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:12.345708    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:12.345785    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:12.345855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:14.957675    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:24:14.960758    6560 start.go:128] duration metric: took 2m7.9606641s to createHost
	I0429 20:24:14.962026    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:17.100197    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:17.100281    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:17.100354    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:19.725196    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:19.725860    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:19.725860    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:24:19.867560    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422259.868914581
	
	I0429 20:24:19.867560    6560 fix.go:216] guest clock: 1714422259.868914581
	I0429 20:24:19.867694    6560 fix.go:229] Guest: 2024-04-29 20:24:19.868914581 +0000 UTC Remote: 2024-04-29 20:24:14.9613787 +0000 UTC m=+133.724240401 (delta=4.907535881s)
	I0429 20:24:19.867694    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:22.005967    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:24.588016    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:24.588016    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:24.588016    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422259
	I0429 20:24:24.741766    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:24:19 UTC 2024
	
	I0429 20:24:24.741837    6560 fix.go:236] clock set: Mon Apr 29 20:24:19 UTC 2024
	 (err=<nil>)
	I0429 20:24:24.741837    6560 start.go:83] releasing machines lock for "multinode-515700", held for 2m17.7427319s
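The clock fix above works by sampling the guest epoch with `date +%s.%N`, comparing it to the host-side timestamp, and resetting the guest with `date -s @<epoch>` when they drift. The delta arithmetic, sketched with the integer epochs from this run:

```shell
# Clock-skew computation using the (truncated) epochs logged above.
guest=1714422259          # guest clock: 2024-04-29 20:24:19 UTC
host=1714422254           # host-side timestamp: 2024-04-29 20:24:14.96 UTC, rounded down
delta=$((guest - host))
echo "delta=${delta}s"    # in the same ballpark as the 4.907535881s reported in the log
```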
	I0429 20:24:24.742129    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:26.884301    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:29.475377    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:29.476046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:29.480912    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:24:29.481639    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:29.493304    6560 ssh_runner.go:195] Run: cat /version.json
	I0429 20:24:29.493304    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:31.702922    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:34.435635    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.436190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.436258    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.480228    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.481073    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.481135    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.531424    6560 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 20:24:34.531720    6560 ssh_runner.go:235] Completed: cat /version.json: (5.0383759s)
	I0429 20:24:34.545943    6560 ssh_runner.go:195] Run: systemctl --version
	I0429 20:24:34.614256    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:24:34.615354    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1343125s)
	I0429 20:24:34.615354    6560 command_runner.go:130] > systemd 252 (252)
	I0429 20:24:34.615354    6560 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 20:24:34.630005    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:24:34.639051    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 20:24:34.639955    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:24:34.653590    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:24:34.683800    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:24:34.683903    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
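The disable pass above renames any bridge/podman CNI configs out of the way with a `.mk_disabled` suffix rather than deleting them. Reproduced against a hypothetical temp directory instead of `/etc/cni/net.d`:

```shell
# Disable bridge/podman CNI configs by appending .mk_disabled (temp dir stand-in).
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-other.conf"
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -o -name '*podman*' \) \
  -a -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"   # only the podman-bridge config gets the suffix; 10-other.conf is untouched
```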
	I0429 20:24:34.683903    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:34.684139    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:34.720958    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:24:34.735137    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:24:34.769077    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:24:34.791121    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:24:34.804751    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:24:34.838781    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.871052    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:24:34.905043    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.940043    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:24:34.975295    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:24:35.009502    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:24:35.044104    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
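Each of the `sed -i -r` passes above rewrites one setting of `/etc/containerd/config.toml` in place while preserving indentation via the captured leading-space group. The `SystemdCgroup` flip, applied to a throwaway copy (GNU sed):

```shell
# Flip SystemdCgroup to false, keeping the original indentation (temp copy of the config).
f=$(mktemp)
printf '            SystemdCgroup = true\n' > "$f"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$f"
cat "$f"
```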
	I0429 20:24:35.078095    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:24:35.099570    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:24:35.114246    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:24:35.146794    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:35.365920    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:24:35.402710    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:35.417050    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Unit]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:24:35.443946    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:24:35.443946    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:24:35.443946    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Service]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Type=notify
	I0429 20:24:35.443946    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:24:35.443946    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:24:35.443946    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:24:35.443946    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:24:35.443946    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:24:35.443946    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:24:35.443946    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:24:35.443946    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:24:35.443946    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:24:35.443946    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:24:35.443946    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:24:35.443946    6560 command_runner.go:130] > Delegate=yes
	I0429 20:24:35.443946    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:24:35.443946    6560 command_runner.go:130] > KillMode=process
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Install]
	I0429 20:24:35.444947    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:24:35.457957    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.500818    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:24:35.548559    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.585869    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.622879    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:24:35.694256    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.721660    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:35.757211    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
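With containerd and crio stopped, crictl is repointed at the cri-dockerd socket by rewriting `/etc/crictl.yaml` (the earlier write in this log targeted containerd's socket). The same write, sent to a temp file instead of the real config:

```shell
# Point crictl at cri-dockerd (temp file stand-in for /etc/crictl.yaml).
f=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' > "$f"
cat "$f"
```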
	I0429 20:24:35.773795    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:24:35.779277    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:24:35.793892    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:24:35.813834    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:24:35.865638    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:24:36.085117    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:24:36.291781    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:24:36.291781    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:24:36.337637    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:36.567033    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:39.106704    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396504s)
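The "configuring docker to use cgroupfs" step above writes a small `/etc/docker/daemon.json` (130 bytes; its content is not shown in the log). A plausible reconstruction, under the assumption that minikube sets the cgroup driver via `exec-opts`, written to a scratch file:

```shell
# Hypothetical daemon.json matching the cgroupfs step; the exact file
# minikube writes is not visible in this log.
DAEMON_JSON=$(mktemp)
cat > "$DAEMON_JSON" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
# After installing this as /etc/docker/daemon.json, the log's next steps apply:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
cat "$DAEMON_JSON"
```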
	I0429 20:24:39.121937    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 20:24:39.164421    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.201973    6560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 20:24:39.432817    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 20:24:39.648494    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:39.872471    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 20:24:39.918782    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.959078    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:40.189711    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 20:24:40.314827    6560 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 20:24:40.327765    6560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 20:24:40.337989    6560 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 20:24:40.338077    6560 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 20:24:40.338077    6560 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Modify: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Change: 2024-04-29 20:24:40.227771386 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] >  Birth: -
	I0429 20:24:40.338228    6560 start.go:562] Will wait 60s for crictl version
	I0429 20:24:40.353543    6560 ssh_runner.go:195] Run: which crictl
	I0429 20:24:40.359551    6560 command_runner.go:130] > /usr/bin/crictl
	I0429 20:24:40.372542    6560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:24:40.422534    6560 command_runner.go:130] > Version:  0.1.0
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeName:  docker
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 20:24:40.422534    6560 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 20:24:40.433531    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.468470    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.477791    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.510922    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.518057    6560 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 20:24:40.518283    6560 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: 172.17.240.1/20
	I0429 20:24:40.538782    6560 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 20:24:40.546082    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
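The one-liner above updates `/etc/hosts` atomically: filter out any stale `host.minikube.internal` entry, append the fresh mapping, write the result to a temp file, then `cp` it into place. The same mechanics on a scratch copy (the stale IP here is illustrative), so no sudo is needed:

```shell
# Scratch hosts file with a stale host.minikube.internal entry.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.17.240.9\thost.minikube.internal\n' > "$HOSTS"
# Drop the stale tab-prefixed entry, append the new mapping, replace atomically.
TMP="$HOSTS.new"
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; printf '172.17.240.1\thost.minikube.internal\n'; } > "$TMP"
cp "$TMP" "$HOSTS"
cat "$HOSTS"
```

Writing to a temp file and copying over the target avoids leaving `/etc/hosts` half-written if the pipeline is interrupted.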
	I0429 20:24:40.569927    6560 kubeadm.go:877] updating cluster {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:24:40.570125    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:24:40.581034    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:40.605162    6560 docker.go:685] Got preloaded images: 
	I0429 20:24:40.605162    6560 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 20:24:40.617894    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:40.637456    6560 command_runner.go:139] > {"Repositories":{}}
	I0429 20:24:40.652557    6560 ssh_runner.go:195] Run: which lz4
	I0429 20:24:40.659728    6560 command_runner.go:130] > /usr/bin/lz4
	I0429 20:24:40.659728    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 20:24:40.676390    6560 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:24:40.682600    6560 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 20:24:43.151463    6560 docker.go:649] duration metric: took 2.4917153s to copy over tarball
	I0429 20:24:43.166991    6560 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:24:51.777678    6560 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6106197s)
	I0429 20:24:51.777678    6560 ssh_runner.go:146] rm: /preloaded.tar.lz4
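The preload step above copies an lz4-compressed image tarball to the guest and unpacks it with `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf`, preserving file capabilities. A sketch of the same `tar -I` mechanics, with gzip standing in for lz4 so it runs where lz4 is absent (an assumption; the real flow uses lz4 and the xattr flags):

```shell
# Pack a file into a compressed tarball via an external compressor (-I),
# then extract it into a different directory with -C, mirroring the log.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo hello > "$SRC/preloaded-file"
tar -I gzip -C "$SRC" -cf "$SRC/preload.tar.gz" preloaded-file
tar -I gzip -C "$DST" -xf "$SRC/preload.tar.gz"
cat "$DST/preloaded-file"   # → hello
```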
	I0429 20:24:51.848689    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:51.869772    6560 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 20:24:51.869772    6560 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 20:24:51.923721    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:52.150884    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:55.504316    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3534062s)
	I0429 20:24:55.515091    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:24:55.540357    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 20:24:55.540357    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:24:55.540557    6560 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 20:24:55.540557    6560 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:24:55.540557    6560 kubeadm.go:928] updating node { 172.17.241.25 8443 v1.30.0 docker true true} ...
	I0429 20:24:55.540557    6560 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-515700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.241.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:24:55.550945    6560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 20:24:55.586940    6560 command_runner.go:130] > cgroupfs
	I0429 20:24:55.587354    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:24:55.587354    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:24:55.587354    6560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:24:55.587354    6560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.241.25 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-515700 NodeName:multinode-515700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.241.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.241.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:24:55.587882    6560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.241.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-515700"
	  kubeletExtraArgs:
	    node-ip: 172.17.241.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.241.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:24:55.601173    6560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubeadm
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubectl
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubelet
	I0429 20:24:55.622022    6560 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:24:55.633924    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:24:55.654273    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 20:24:55.692289    6560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:24:55.726319    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:24:55.774801    6560 ssh_runner.go:195] Run: grep 172.17.241.25	control-plane.minikube.internal$ /etc/hosts
	I0429 20:24:55.781653    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.241.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:24:55.820570    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:56.051044    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:24:56.087660    6560 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700 for IP: 172.17.241.25
	I0429 20:24:56.087753    6560 certs.go:194] generating shared ca certs ...
	I0429 20:24:56.087824    6560 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 20:24:56.089063    6560 certs.go:256] generating profile certs ...
	I0429 20:24:56.089855    6560 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key
	I0429 20:24:56.089855    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt with IP's: []
	I0429 20:24:56.283640    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt ...
	I0429 20:24:56.284633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt: {Name:mk1286f657dae134d1e4806ec4fc1d780c02f0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.285633    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key ...
	I0429 20:24:56.285633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key: {Name:mka98d4501f3f942abed1092b1c97c4a2bbd30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.286633    6560 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d
	I0429 20:24:56.287300    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.241.25]
	I0429 20:24:56.456862    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d ...
	I0429 20:24:56.456862    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d: {Name:mk09d828589f59d94791e90fc999c9ce1101118e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d ...
	I0429 20:24:56.458476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d: {Name:mk92ebf0409a99e4a3e3430ff86080f164f4bc0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458796    6560 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt
	I0429 20:24:56.473961    6560 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key
	I0429 20:24:56.474965    6560 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key
	I0429 20:24:56.474965    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt with IP's: []
	I0429 20:24:56.680472    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt ...
	I0429 20:24:56.680472    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt: {Name:mkc600562c7738e3eec9de4025428e3048df463a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.682476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key ...
	I0429 20:24:56.682476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key: {Name:mkc9ba6e1afbc9ca05ce8802b568a72bfd19a90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 20:24:56.685482    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 20:24:56.693323    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 20:24:56.701358    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 20:24:56.702409    6560 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 20:24:56.702718    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 20:24:56.702843    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 20:24:56.705315    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:24:56.758912    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 20:24:56.809584    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:24:56.860874    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:24:56.918708    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:24:56.969377    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:24:57.018903    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:24:57.070438    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:24:57.119823    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:24:57.168671    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 20:24:57.216697    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 20:24:57.263605    6560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:24:57.314590    6560 ssh_runner.go:195] Run: openssl version
	I0429 20:24:57.325614    6560 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 20:24:57.340061    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:24:57.374639    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.394971    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.404667    6560 command_runner.go:130] > b5213941
	I0429 20:24:57.419162    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:24:57.454540    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 20:24:57.494441    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.501867    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.502224    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.517134    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.527174    6560 command_runner.go:130] > 51391683
	I0429 20:24:57.544472    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 20:24:57.579789    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 20:24:57.613535    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622605    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622696    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.637764    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.649176    6560 command_runner.go:130] > 3ec20f2e
	I0429 20:24:57.665410    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
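The `openssl x509 -hash` calls and `ln -fs` commands above implement OpenSSL's hashed-directory convention: each CA in `/etc/ssl/certs` gets a symlink named `<subject-hash>.0` pointing at the PEM file, which is how tools locate a CA when verifying against that directory. A minimal sketch of the same pattern, using a throwaway self-signed cert and a temp directory rather than the real paths:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)

# Generate a throwaway self-signed CA certificate (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# Compute the subject-name hash OpenSSL uses to look up CAs in a certs dir...
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")

# ...and create the "<hash>.0" symlink, as the ln -fs commands in the log do.
ln -fs "$dir/ca.pem" "$dir/$hash.0"

# OpenSSL can now resolve the CA by hash when verifying against the directory.
openssl verify -CApath "$dir" "$dir/ca.pem"
```

The `test -s X && ln -fs X Y` form in the log additionally skips the symlink when the source certificate is missing or empty.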
	I0429 20:24:57.708796    6560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:24:57.716466    6560 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717133    6560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717510    6560 kubeadm.go:391] StartCluster: {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:24:57.729105    6560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 20:24:57.771112    6560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 20:24:57.792952    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 20:24:57.807601    6560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:24:57.837965    6560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 20:24:57.856820    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:156] found existing configuration files:
	
	I0429 20:24:57.872870    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:24:57.892109    6560 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.892549    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.905782    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:24:57.939062    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:24:57.957607    6560 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.957753    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.972479    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:24:58.006849    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.025918    6560 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.025918    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.039054    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.072026    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:24:58.092314    6560 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.092673    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.105776    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
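The four grep/rm pairs above all follow one pattern: keep an existing kubeconfig only if it already references `https://control-plane.minikube.internal:8443`, otherwise delete it so `kubeadm init` regenerates it. A minimal sketch of that check-then-remove logic, with a hypothetical stale file in a temp directory:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
marker="https://control-plane.minikube.internal:8443"

# A stale config whose server URL does not match the expected endpoint.
echo "server: https://old-endpoint:8443" > "$dir/admin.conf"

# Same logic as the log: grep exits non-zero when the marker (or the file)
# is absent, in which case the config is removed for regeneration.
if ! grep -q "$marker" "$dir/admin.conf" 2>/dev/null; then
    rm -f "$dir/admin.conf"
fi

test ! -e "$dir/admin.conf" && echo "stale admin.conf removed"
```

Here every grep exits with status 2 because the files do not exist at all (first start), so the subsequent `rm -f` calls are no-ops.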
	I0429 20:24:58.124274    6560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:24:58.562957    6560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:24:58.562957    6560 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:25:12.186137    6560 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186137    6560 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186277    6560 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 20:25:12.186320    6560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:25:12.186516    6560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.187085    6560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.187131    6560 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.190071    6560 out.go:204]   - Generating certificates and keys ...
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190667    6560 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.191251    6560 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191251    6560 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 20:25:12.193040    6560 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193086    6560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193701    6560 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193701    6560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.198949    6560 out.go:204]   - Booting up control plane ...
	I0429 20:25:12.193843    6560 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199855    6560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:25:12.200494    6560 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200494    6560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201207    6560 command_runner.go:130] > [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.202201    6560 command_runner.go:130] > [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 out.go:204]   - Configuring RBAC rules ...
	I0429 20:25:12.202201    6560 command_runner.go:130] > [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.204361    6560 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.206433    6560 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206433    6560 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206983    6560 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 kubeadm.go:309] 
	I0429 20:25:12.207142    6560 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 kubeadm.go:309] 
	I0429 20:25:12.207365    6560 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207404    6560 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207464    6560 kubeadm.go:309] 
	I0429 20:25:12.207514    6560 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 20:25:12.207589    6560 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:25:12.207764    6560 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.207807    6560 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.208030    6560 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 kubeadm.go:309] 
	I0429 20:25:12.208230    6560 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208230    6560 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208281    6560 kubeadm.go:309] 
	I0429 20:25:12.208375    6560 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208375    6560 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208442    6560 kubeadm.go:309] 
	I0429 20:25:12.208643    6560 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 20:25:12.208733    6560 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:25:12.208874    6560 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.208936    6560 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.209014    6560 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209014    6560 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209735    6560 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209735    6560 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--control-plane 
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--control-plane 
	I0429 20:25:12.210277    6560 kubeadm.go:309] 
	I0429 20:25:12.210538    6560 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] 
	I0429 20:25:12.210726    6560 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210726    6560 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210937    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
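The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is, per the kubeadm documentation, the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA they fetch during token-based discovery. A sketch of the computation against a throwaway CA (in a real cluster the input is `/etc/kubernetes/pki/ca.crt`):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)

# Stand-in CA certificate for demonstration purposes.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null

# Extract the public key, re-encode it as DER (SubjectPublicKeyInfo),
# and hash it; the last awk field strips openssl's "(stdin)= " prefix.
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -hex | awk '{print $NF}')

echo "sha256:$hash"
```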
	I0429 20:25:12.210937    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:25:12.211197    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:25:12.215717    6560 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 20:25:12.234164    6560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 20:25:12.242817    6560 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: 2024-04-29 20:23:14.801002600 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Change: 2024-04-29 20:23:06.257000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] >  Birth: -
	I0429 20:25:12.242817    6560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 20:25:12.242817    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 20:25:12.301387    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 20:25:13.060621    6560 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > serviceaccount/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > daemonset.apps/kindnet created
	I0429 20:25:13.060707    6560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-515700 minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=multinode-515700 minikube.k8s.io/primary=true
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.092072    6560 command_runner.go:130] > -16
	I0429 20:25:13.092113    6560 ops.go:34] apiserver oom_adj: -16
	I0429 20:25:13.290753    6560 command_runner.go:130] > node/multinode-515700 labeled
	I0429 20:25:13.292700    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 20:25:13.306335    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.426974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:13.819653    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.947766    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.320587    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.442246    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.822864    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.943107    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.309117    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.432718    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.814070    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.933861    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.317878    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.440680    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.819594    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.942387    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.322995    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.435199    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.809136    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.932465    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.308164    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.429047    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.808817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.928476    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.310090    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.432479    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.815590    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.929079    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.321723    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.442512    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.819466    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.933742    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.309314    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.424974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.811819    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.952603    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.316794    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.432125    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.808890    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.925838    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.310021    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.434432    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.819369    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.948876    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.307817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.457947    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.818980    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.932003    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:25.308659    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:25.488149    6560 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 20:25:25.488217    6560 command_runner.go:130] > default   0         1s
	I0429 20:25:25.489686    6560 kubeadm.go:1107] duration metric: took 12.4288824s to wait for elevateKubeSystemPrivileges
	W0429 20:25:25.489686    6560 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:25:25.489686    6560 kubeadm.go:393] duration metric: took 27.7719601s to StartCluster
	I0429 20:25:25.490694    6560 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.490694    6560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:25.491677    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.493697    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 20:25:25.493697    6560 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:25:25.498680    6560 out.go:177] * Verifying Kubernetes components...
	I0429 20:25:25.493697    6560 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:25:25.494664    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:25.504657    6560 addons.go:69] Setting storage-provisioner=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:69] Setting default-storageclass=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:234] Setting addon storage-provisioner=true in "multinode-515700"
	I0429 20:25:25.504657    6560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-515700"
	I0429 20:25:25.504657    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.520673    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:25:25.944109    6560 command_runner.go:130] > apiVersion: v1
	I0429 20:25:25.944267    6560 command_runner.go:130] > data:
	I0429 20:25:25.944267    6560 command_runner.go:130] >   Corefile: |
	I0429 20:25:25.944367    6560 command_runner.go:130] >     .:53 {
	I0429 20:25:25.944367    6560 command_runner.go:130] >         errors
	I0429 20:25:25.944367    6560 command_runner.go:130] >         health {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            lameduck 5s
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         ready
	I0429 20:25:25.944367    6560 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            pods insecure
	I0429 20:25:25.944367    6560 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 20:25:25.944367    6560 command_runner.go:130] >            ttl 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         prometheus :9153
	I0429 20:25:25.944367    6560 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            max_concurrent 1000
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         cache 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loop
	I0429 20:25:25.944367    6560 command_runner.go:130] >         reload
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loadbalance
	I0429 20:25:25.944367    6560 command_runner.go:130] >     }
	I0429 20:25:25.944367    6560 command_runner.go:130] > kind: ConfigMap
	I0429 20:25:25.944367    6560 command_runner.go:130] > metadata:
	I0429 20:25:25.944367    6560 command_runner.go:130] >   creationTimestamp: "2024-04-29T20:25:11Z"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   name: coredns
	I0429 20:25:25.944367    6560 command_runner.go:130] >   namespace: kube-system
	I0429 20:25:25.944367    6560 command_runner.go:130] >   resourceVersion: "265"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   uid: af2c186a-a14a-4671-8545-05c5ef5d4a89
	I0429 20:25:25.949389    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 20:25:26.023682    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:25:26.408680    6560 command_runner.go:130] > configmap/coredns replaced
	I0429 20:25:26.414254    6560 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.417677    6560 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 20:25:26.417677    6560 node_ready.go:35] waiting up to 6m0s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.435291    6560 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.437034    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.438430    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Audit-Id: a2ae57e5-53a3-4342-ad5c-c2149e87ef04
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438430    6560 round_trippers.go:580]     Audit-Id: 2e6b22a8-9874-417c-a6a5-f7b7437121f7
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438909    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.439086    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:26.440203    6560 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.440298    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.440406    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.440406    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.440519    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:26.440519    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.459913    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.459962    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Audit-Id: 9ca07d91-957f-4992-9642-97b01e07dde3
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.459962    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"393","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.918339    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.918339    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918339    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918339    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.918300    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.918498    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918580    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918580    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Audit-Id: 70383541-35df-461a-b4fb-41bd3b56f11d
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Audit-Id: e628428d-1384-4709-a32e-084c9dfec614
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.929164    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"404","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.929400    6560 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-515700" context rescaled to 1 replicas
	I0429 20:25:26.929400    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.426913    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.426913    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.426913    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.426913    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.430795    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:27.430795    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Audit-Id: e4e6b2b1-e008-4f2a-bae4-3596fce97666
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.430996    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.431340    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.789217    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.789348    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.792426    6560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:25:27.791141    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:27.795103    6560 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:27.795205    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:25:27.795205    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.795205    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:27.795924    6560 addons.go:234] Setting addon default-storageclass=true in "multinode-515700"
	I0429 20:25:27.795924    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:27.796802    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.922993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.923088    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.923175    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.923175    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.929435    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:27.929435    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.929545    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.929545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.929638    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Audit-Id: 8ef77f9f-d18f-4fa7-ab77-85c137602c84
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.930046    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.432611    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.432611    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.432611    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.432611    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.441320    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:28.441862    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Audit-Id: d32cd9f8-494c-4a69-b028-606c7d354657
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.442308    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.442914    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:28.927674    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.927674    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.927674    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.927897    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.933213    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:28.933794    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Audit-Id: 75d40b2c-c2ed-4221-9361-88591791a649
	I0429 20:25:28.934208    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.422724    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.422898    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.422898    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.422975    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.426431    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:29.426876    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Audit-Id: dde47b6c-069b-408d-a5c6-0a2ea7439643
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.427261    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.918308    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.918308    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.918308    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.918407    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.921072    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:29.921072    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.921072    6560 round_trippers.go:580]     Audit-Id: d4643df6-68ad-4c4c-9604-a5a4d019fba1
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.922076    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.057466    6560 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:30.057636    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:25:30.057750    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:30.145026    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:30.424041    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.424310    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.424310    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.424310    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.428606    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.429051    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.429263    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Audit-Id: 2c59a467-8079-41ed-ac1d-f96dd660d343
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.429435    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.931993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.931993    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.931993    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.931993    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.936635    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.936635    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.937644    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Audit-Id: 9214de5b-8221-4c68-b6b9-a92fe7d41fd1
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.938175    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.939066    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:31.423866    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.423866    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.423866    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.423988    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.427054    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.427827    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.427827    6560 round_trippers.go:580]     Audit-Id: 5f66acb8-ef38-4220-83b6-6e87fbec6f58
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.427869    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:31.932664    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.932664    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.932761    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.932761    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.936680    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.936680    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Audit-Id: f9fb721e-ccaf-4e33-ac69-8ed840761191
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.937009    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.312723    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:32.424680    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.424953    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.424953    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.424953    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.428624    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:32.428906    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.428906    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Audit-Id: d3a39f3a-571d-46c0-a442-edf136da8a11
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.429531    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.858444    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:32.926226    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.926317    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.926393    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.926393    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.929204    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:32.929583    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.929583    6560 round_trippers.go:580]     Audit-Id: 55fc987d-65c0-4ac8-95d2-7fa4185e179b
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.929734    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.930327    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.034553    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:33.425759    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.425833    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.425833    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.425833    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.428624    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.429656    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Audit-Id: d581fce7-8906-48d7-8e13-2d1aba9dec04
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.429916    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.430438    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:33.930984    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.931053    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.931053    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.931053    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.933717    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.933717    6560 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.933717    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.933717    6560 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Audit-Id: 680ed792-db71-4b29-abb9-40f7154e8b1e
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.933717    6560 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 20:25:33.933717    6560 command_runner.go:130] > pod/storage-provisioner created
	I0429 20:25:33.933717    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.428102    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.428102    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.428102    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.428102    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.431722    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:34.432624    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Audit-Id: 86cc0608-3000-42b0-9ce8-4223e32d60c3
	I0429 20:25:34.432684    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.433082    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.932029    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.932316    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.932316    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.932316    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.936749    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:34.936749    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Audit-Id: 0e63a4db-3dd4-4e74-8b79-c019b6b97e89
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.937415    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:35.024893    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:35.025151    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:35.025317    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:35.170634    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:35.371184    6560 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0429 20:25:35.371418    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 20:25:35.371571    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.371571    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.371571    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.380781    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:35.381213    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Audit-Id: 31f5e265-3d38-4520-88d0-33f47325189c
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Length: 1273
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.381380    6560 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 20:25:35.382106    6560 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.382183    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 20:25:35.382183    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:35.382269    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.390758    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.390758    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.390758    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Audit-Id: 4dbb716e-2d97-4c38-b342-f63e7d38a4d0
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Length: 1220
	I0429 20:25:35.391190    6560 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.395279    6560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 20:25:35.397530    6560 addons.go:505] duration metric: took 9.9037568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 20:25:35.421733    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.421733    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.421733    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.421733    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.452743    6560 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 20:25:35.452743    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.452743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Audit-Id: 316d0393-7ba5-4629-87cb-7ae54d0ea965
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.454477    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.455068    6560 node_ready.go:49] node "multinode-515700" has status "Ready":"True"
	I0429 20:25:35.455148    6560 node_ready.go:38] duration metric: took 9.0374019s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:35.455148    6560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:35.455213    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:35.455213    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.455213    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.455213    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.473128    6560 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 20:25:35.473128    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Audit-Id: 81e159c0-b703-47ba-a9f3-82cc907b8705
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.475820    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"431","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0429 20:25:35.481714    6560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:35.482325    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.482379    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.482379    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.482432    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.491093    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.491093    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Audit-Id: a2eb7ca2-d415-4a7c-a1f0-1ac743bd8f82
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.492090    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.493335    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.493335    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.493335    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.493419    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.496084    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:35.496084    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.496084    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Audit-Id: f61c97ad-ee7a-4666-9244-d7d2091b5d09
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.497097    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.497131    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.497332    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.991323    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.991323    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.991323    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.991323    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.995451    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:35.995451    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Audit-Id: faa8a1a4-279f-4dc3-99c8-8c3b9e9ed746
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:35.996592    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.997239    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.997292    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.997292    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.997292    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.999987    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:35.999987    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Audit-Id: 070c7fff-f707-4b9a-9aef-031cedc68a8c
	I0429 20:25:36.000411    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.483004    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.483004    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.483004    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.483004    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.488152    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:36.488152    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.488152    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.488678    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Audit-Id: fb5cc675-b39d-4cb0-ba8c-24140b3d95e8
	I0429 20:25:36.489818    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.490926    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.490926    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.490985    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.490985    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.494654    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:36.494654    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Audit-Id: fe6d880a-4cf8-4b10-8c7f-debde123eafc
	I0429 20:25:36.495423    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.991643    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.991643    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.991643    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.991855    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.996384    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:36.996384    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Audit-Id: 933a6dd5-a0f7-4380-8189-3e378a8a620d
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:36.997332    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.997760    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.997760    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.997760    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.997760    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.000889    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.000889    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.001211    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Audit-Id: 0342e743-45a6-4fd7-97be-55a766946396
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.001274    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.001759    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.495712    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:37.495712    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.495712    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.495712    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.508671    6560 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 20:25:37.508671    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Audit-Id: d30c6154-a41b-4a0d-976f-d19f40e67223
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.508671    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0429 20:25:37.510663    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.510663    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.510663    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.510663    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.513686    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.513686    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Audit-Id: 397b83a5-95f9-4df8-a76b-042ecc96922c
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.514662    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.514662    6560 pod_ready.go:92] pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.514662    6560 pod_ready.go:81] duration metric: took 2.0329329s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-515700
	I0429 20:25:37.514662    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.514662    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.514662    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.517691    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.517691    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Audit-Id: df53f071-06ed-4797-a51b-7d01b84cac86
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.518412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-515700","namespace":"kube-system","uid":"85f2dc9a-17b5-413c-ab83-d3cbe955571e","resourceVersion":"319","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.241.25:2379","kubernetes.io/config.hash":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.mirror":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.seen":"2024-04-29T20:25:11.718525866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0429 20:25:37.519044    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.519044    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.519124    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.519124    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.521788    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.521788    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Audit-Id: ee5fdb3e-9869-4cd7-996a-a25b453aeb6b
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.521944    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.522769    6560 pod_ready.go:92] pod "etcd-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.522844    6560 pod_ready.go:81] duration metric: took 8.1819ms for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.522844    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.523015    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-515700
	I0429 20:25:37.523015    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.523079    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.523079    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.525575    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.525575    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Audit-Id: cd9d851c-f606-48c9-8da3-3d194ab5464f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.526015    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-515700","namespace":"kube-system","uid":"f5a212eb-87a9-476a-981a-9f31731f39e6","resourceVersion":"312","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.241.25:8443","kubernetes.io/config.hash":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.mirror":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.seen":"2024-04-29T20:25:11.718530866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0429 20:25:37.526356    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.526356    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.526356    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.526356    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.535954    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:37.535954    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Audit-Id: 018aa21f-d408-4777-b54c-eb7aa2295a34
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.536470    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.536974    6560 pod_ready.go:92] pod "kube-apiserver-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.537034    6560 pod_ready.go:81] duration metric: took 14.0881ms for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537034    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537183    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-515700
	I0429 20:25:37.537276    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.537297    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.537297    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.539964    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.539964    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Audit-Id: d3232756-fc07-4b33-a3b5-989d2932cec4
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.541274    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-515700","namespace":"kube-system","uid":"2c9ba563-c2af-45b7-bc1e-bf39759a339b","resourceVersion":"315","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.mirror":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.seen":"2024-04-29T20:25:11.718532066Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0429 20:25:37.541935    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.541935    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.541935    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.541935    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.555960    6560 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 20:25:37.555960    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Audit-Id: 2d117219-3b1a-47fe-99a4-7e5aea7e84d3
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.555960    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.555960    6560 pod_ready.go:92] pod "kube-controller-manager-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.555960    6560 pod_ready.go:81] duration metric: took 18.9251ms for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.555960    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.556943    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gx5x
	I0429 20:25:37.556943    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.556943    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.556943    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.559965    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.560477    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.560477    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Audit-Id: 14e6d1be-eac6-4f20-9621-b409c951fae1
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.560781    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gx5x","generateName":"kube-proxy-","namespace":"kube-system","uid":"886ac698-7e9b-431b-b822-577331b02f41","resourceVersion":"407","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"027f1d05-009f-4199-81e9-45b0a2d3710f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"027f1d05-009f-4199-81e9-45b0a2d3710f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0429 20:25:37.561552    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.561581    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.561581    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.561581    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.567713    6560 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 20:25:37.567713    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Audit-Id: 678df177-6944-4d30-b889-62528c06bab2
	I0429 20:25:37.567713    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.568391    6560 pod_ready.go:92] pod "kube-proxy-6gx5x" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.568391    6560 pod_ready.go:81] duration metric: took 12.4313ms for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.568391    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.701559    6560 request.go:629] Waited for 132.9214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701779    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701853    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.701853    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.701853    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.705314    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.706129    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.706129    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.706183    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Audit-Id: 4fb010ad-4d68-4aa0-9ba4-f68d04faa9e8
	I0429 20:25:37.706412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-515700","namespace":"kube-system","uid":"096d3e94-25ba-49b3-b329-6fb47fc88f25","resourceVersion":"334","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.mirror":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.seen":"2024-04-29T20:25:11.718533166Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0429 20:25:37.905204    6560 request.go:629] Waited for 197.8802ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.905322    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.905466    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.909057    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.909159    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Audit-Id: a6cecf7e-83ad-4d5f-8cbb-a65ced7e83ce
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.909286    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.909697    6560 pod_ready.go:92] pod "kube-scheduler-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.909697    6560 pod_ready.go:81] duration metric: took 341.3037ms for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.909697    6560 pod_ready.go:38] duration metric: took 2.4545299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:37.909697    6560 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:25:37.923721    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:25:37.956142    6560 command_runner.go:130] > 2047
	I0429 20:25:37.956226    6560 api_server.go:72] duration metric: took 12.462433s to wait for apiserver process to appear ...
	I0429 20:25:37.956226    6560 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:25:37.956330    6560 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:25:37.965150    6560 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:25:37.965332    6560 round_trippers.go:463] GET https://172.17.241.25:8443/version
	I0429 20:25:37.965364    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.965364    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.965364    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.967124    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:37.967124    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Audit-Id: c3b17e5f-8eb5-4422-bcd1-48cea5831311
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.967423    6560 round_trippers.go:580]     Content-Length: 263
	I0429 20:25:37.967423    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 20:25:37.967530    6560 api_server.go:141] control plane version: v1.30.0
	I0429 20:25:37.967530    6560 api_server.go:131] duration metric: took 11.2306ms to wait for apiserver health ...
	I0429 20:25:37.967629    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:25:38.109818    6560 request.go:629] Waited for 142.1878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110201    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110256    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.110275    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.110275    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.118070    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.118070    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Audit-Id: 557b3073-d14e-4919-8133-995d5b042d22
	I0429 20:25:38.119823    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.123197    6560 system_pods.go:59] 8 kube-system pods found
	I0429 20:25:38.123197    6560 system_pods.go:61] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.123197    6560 system_pods.go:74] duration metric: took 155.566ms to wait for pod list to return data ...
	I0429 20:25:38.123197    6560 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:25:38.295950    6560 request.go:629] Waited for 172.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.296300    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.296300    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.300424    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:38.300424    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Content-Length: 261
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Audit-Id: 7466bf5b-fa07-4a6b-bc06-274738fc9259
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.300674    6560 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"13c4332f-9236-4f04-9e46-f5a98bc3d731","resourceVersion":"343","creationTimestamp":"2024-04-29T20:25:24Z"}}]}
	I0429 20:25:38.300674    6560 default_sa.go:45] found service account: "default"
	I0429 20:25:38.300674    6560 default_sa.go:55] duration metric: took 177.4758ms for default service account to be created ...
	I0429 20:25:38.300674    6560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:25:38.498686    6560 request.go:629] Waited for 197.291ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.498782    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.499005    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.499005    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.499005    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.506756    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.507387    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Audit-Id: ffc5efdb-4263-4450-8ff2-c1bb3f979300
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.507485    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.507503    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.508809    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.512231    6560 system_pods.go:86] 8 kube-system pods found
	I0429 20:25:38.512305    6560 system_pods.go:89] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.512305    6560 system_pods.go:89] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.512451    6560 system_pods.go:89] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.512451    6560 system_pods.go:126] duration metric: took 211.7756ms to wait for k8s-apps to be running ...
	I0429 20:25:38.512451    6560 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:25:38.526027    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:25:38.555837    6560 system_svc.go:56] duration metric: took 43.3852ms WaitForService to wait for kubelet
	I0429 20:25:38.555837    6560 kubeadm.go:576] duration metric: took 13.0620394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:25:38.556007    6560 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:25:38.701455    6560 request.go:629] Waited for 145.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701896    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701917    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.701917    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.702032    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.709221    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.709221    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Audit-Id: 9241b2a0-c483-4bfb-8a19-8f5c9b610b53
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.709221    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0429 20:25:38.710061    6560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:25:38.710061    6560 node_conditions.go:123] node cpu capacity is 2
	I0429 20:25:38.710061    6560 node_conditions.go:105] duration metric: took 154.0529ms to run NodePressure ...
	I0429 20:25:38.710061    6560 start.go:240] waiting for startup goroutines ...
	I0429 20:25:38.710061    6560 start.go:245] waiting for cluster config update ...
	I0429 20:25:38.710061    6560 start.go:254] writing updated cluster config ...
	I0429 20:25:38.717493    6560 out.go:177] 
	I0429 20:25:38.721129    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.735840    6560 out.go:177] * Starting "multinode-515700-m02" worker node in "multinode-515700" cluster
	I0429 20:25:38.738518    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:25:38.738518    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:25:38.738983    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:25:38.739240    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:25:38.739481    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.745029    6560 start.go:360] acquireMachinesLock for multinode-515700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:25:38.745029    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m02"
	I0429 20:25:38.745029    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 20:25:38.745575    6560 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 20:25:38.748852    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:25:38.748852    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:25:38.748852    6560 client.go:168] LocalClient.Create starting
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:40.746212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:25:42.605453    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:47.992432    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:47.992702    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:47.996014    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:25:48.551162    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: Creating VM...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:51.874174    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:25:51.874221    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:53.736899    6560 main.go:141] libmachine: Creating VHD
	I0429 20:25:53.737514    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D65FFD0C-285E-44D0-8723-21544BDDE71A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:25:57.529054    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:00.734035    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -SizeBytes 20000MB
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:03.314283    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-515700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700-m02 -DynamicMemoryEnabled $false
	I0429 20:26:09.480100    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700-m02 -Count 2
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:11.716979    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\boot2docker.iso'
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:14.377298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd'
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:17.090909    6560 main.go:141] libmachine: Starting VM...
	I0429 20:26:17.090909    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m02
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stderr =====>] : 
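The `[executing ==>]` lines above show minikube driving Hyper-V by shelling out to powershell.exe once per cmdlet: create the VM, pin its memory and CPU count, attach the boot ISO and data disk, then start it. A condensed dry-run sketch of that sequence (the `run` helper just prints each cmdlet; on the real host minikube invokes the powershell.exe path from the log, and the ISO/disk paths are abbreviated here):

```shell
#!/bin/sh
# Dry-run sketch: `run` prints each Hyper-V cmdlet instead of invoking
# C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe as the log does.
run() { printf '%s\n' "$*"; }

VM=multinode-515700-m02
run "Hyper-V\New-VM $VM -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB"
run "Hyper-V\Set-VMMemory -VMName $VM -DynamicMemoryEnabled \$false"
run "Hyper-V\Set-VMProcessor $VM -Count 2"
run "Hyper-V\Set-VMDvdDrive -VMName $VM -Path boot2docker.iso"   # boot ISO
run "Hyper-V\Add-VMHardDiskDrive -VMName $VM -Path disk.vhd"     # data disk
run "Hyper-V\Start-VM $VM"
```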
	I0429 20:26:20.223074    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:22.527096    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:26.113296    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:28.339433    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:30.953587    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:30.953628    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:31.955478    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:34.197688    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:34.197831    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:34.197901    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:37.817016    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:42.682666    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:42.683603    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:43.685897    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:48.604877    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:48.604915    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:48.604999    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:50.794876    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:50.795093    6560 main.go:141] libmachine: [stderr =====>] : 
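The alternating `.state` / `ipaddresses[0]` queries above are minikube's host-wait loop: it polls until the first network adapter reports an address (stdout is empty until ~20:26:48, then 172.17.253.145 appears). A minimal shell sketch of that loop, with a stubbed query standing in for the real PowerShell call:

```shell
#!/bin/sh
# Sketch (not minikube's actual code) of the "Waiting for host to start"
# loop: repeatedly query the VM's first adapter until an IPv4 address
# shows up. get_vm_ip is a stand-in for the PowerShell query
#   (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
# and here simulates DHCP taking a few polls to hand out a lease.
STATE_FILE=$(mktemp)
get_vm_ip() {
  c=$(cat "$STATE_FILE" 2>/dev/null); c=$((${c:-0} + 1))
  echo "$c" > "$STATE_FILE"
  [ "$c" -ge 3 ] && echo "172.17.253.145"
  return 0
}

wait_for_ip() {
  tries=0
  while [ "$tries" -lt 10 ]; do
    ip=$(get_vm_ip)              # empty until the guest has a lease
    if [ -n "$ip" ]; then
      printf '%s\n' "$ip"
      return 0
    fi
    tries=$((tries + 1))
    sleep "${POLL_INTERVAL:-1}"  # the log shows ~1s between attempts
  done
  return 1
}

POLL_INTERVAL=0
wait_for_ip
```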
	I0429 20:26:50.795407    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:26:50.795407    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:52.992195    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:52.992243    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:52.992331    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:55.630552    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:55.641728    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:26:55.642758    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:26:55.769182    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:26:55.769182    6560 buildroot.go:166] provisioning hostname "multinode-515700-m02"
	I0429 20:26:55.769333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:57.942857    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:57.943721    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:57.943789    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:00.610012    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:00.610498    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:00.617342    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:00.618022    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:00.618022    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700-m02 && echo "multinode-515700-m02" | sudo tee /etc/hostname
	I0429 20:27:00.774430    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m02
	
	I0429 20:27:00.775391    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:02.970796    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:02.971352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:02.971577    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:05.640782    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:05.640782    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:05.640782    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:27:05.779330    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
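The `/etc/hosts` snippet above rewrites the `127.0.1.1` line to the node's hostname if one exists, otherwise appends it. The same logic can be exercised against a throwaway file (the hostname is from the log; the temp file is ours):

```shell
#!/bin/sh
# Exercise the log's /etc/hosts logic against a temp file instead of
# the real /etc/hosts (so no sudo is needed).
NAME=multinode-515700-m02
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

if ! grep -xq ".*\s$NAME" "$HOSTS"; then
  if grep -xq '127.0.1.1\s.*' "$HOSTS"; then
    # same sed the log runs (there, against /etc/hosts via sudo)
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```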
	I0429 20:27:05.779330    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:27:05.779435    6560 buildroot.go:174] setting up certificates
	I0429 20:27:05.779435    6560 provision.go:84] configureAuth start
	I0429 20:27:05.779531    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:07.939785    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:10.607752    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:10.608236    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:10.608319    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:15.428095    6560 provision.go:143] copyHostCerts
	I0429 20:27:15.429066    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:27:15.429066    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:27:15.429066    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:27:15.429626    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:27:15.430936    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:27:15.431366    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:27:15.431366    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:27:15.431875    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:27:15.432822    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:27:15.433064    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:27:15.433064    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:27:15.433498    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:27:15.434807    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700-m02 san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]
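The server-cert step above signs a per-node certificate with the minikube CA, putting the node's IPs and hostnames into the SANs (`san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]`). minikube does this in Go; a rough openssl equivalent with the same SAN list, using throwaway file names of our own:

```shell
#!/bin/sh
# Rough openssl equivalent of provision.go's server-cert generation.
# minikube uses Go's crypto libraries; SANs here match the log's san=[...].
DIR=$(mktemp -d)
# throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/ca-key.pem" -out "$DIR/ca.pem" -subj "/O=minikubeCA" 2>/dev/null
# server key + CSR for the node
openssl req -newkey rsa:2048 -nodes \
  -keyout "$DIR/server-key.pem" -out "$DIR/server.csr" \
  -subj "/O=jenkins.multinode-515700-m02" 2>/dev/null
# sign with the CA, adding the SANs from the log
printf 'subjectAltName=IP:127.0.0.1,IP:172.17.253.145,DNS:localhost,DNS:minikube,DNS:multinode-515700-m02\n' > "$DIR/san.cnf"
openssl x509 -req -days 1 -in "$DIR/server.csr" \
  -CA "$DIR/ca.pem" -CAkey "$DIR/ca-key.pem" -CAcreateserial \
  -extfile "$DIR/san.cnf" -out "$DIR/server.pem" 2>/dev/null
openssl verify -CAfile "$DIR/ca.pem" "$DIR/server.pem"
```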
	I0429 20:27:15.511954    6560 provision.go:177] copyRemoteCerts
	I0429 20:27:15.527105    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:27:15.527105    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:20.368198    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:20.368587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:20.368930    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:20.467819    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9406764s)
	I0429 20:27:20.468832    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:27:20.469887    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:27:20.524889    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:27:20.525559    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 20:27:20.578020    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:27:20.578217    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:27:20.634803    6560 provision.go:87] duration metric: took 14.8552541s to configureAuth
	I0429 20:27:20.634874    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:27:20.635533    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:27:20.635638    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:22.779762    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:25.427345    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:25.427345    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:25.428345    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:27:25.562050    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:27:25.562195    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:27:25.562515    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:27:25.562592    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:30.404141    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:30.405195    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:30.412105    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:30.413171    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:30.413700    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.241.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:27:30.578477    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.241.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:27:30.578477    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:32.772580    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:35.465933    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:35.466426    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:35.466509    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:27:37.701893    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
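The `diff ... || { mv ...; systemctl ... }` one-liner above is a compare-then-swap: the candidate unit is written to `docker.service.new`, and only when it differs from the installed unit (or, as in this run, no unit exists yet and diff fails with "can't stat") is it moved into place and the daemon reloaded and restarted. The same idiom against temp files, with the systemctl step reduced to an echo:

```shell
#!/bin/sh
# Compare-then-swap sketch of the docker.service update above.
# Real paths are /lib/systemd/system/docker.service{,.new}; here we use
# a temp dir and echo instead of running systemctl.
DIR=$(mktemp -d)
printf '[Unit]\nDescription=new unit\n' > "$DIR/docker.service.new"

# docker.service does not exist yet, so diff fails (the log's
# "can't stat ... No such file or directory"), triggering the swap.
diff -u "$DIR/docker.service" "$DIR/docker.service.new" >/dev/null 2>&1 || {
  mv "$DIR/docker.service.new" "$DIR/docker.service"
  echo "would run: systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
}
```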
	I0429 20:27:37.701981    6560 machine.go:97] duration metric: took 46.9062133s to provisionDockerMachine
	I0429 20:27:37.702052    6560 client.go:171] duration metric: took 1m58.9522849s to LocalClient.Create
	I0429 20:27:37.702194    6560 start.go:167] duration metric: took 1m58.9524269s to libmachine.API.Create "multinode-515700"
	I0429 20:27:37.702194    6560 start.go:293] postStartSetup for "multinode-515700-m02" (driver="hyperv")
	I0429 20:27:37.702194    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:27:37.716028    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:27:37.716028    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:39.888498    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:39.889355    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:39.889707    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:42.576527    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:42.688245    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9721792s)
	I0429 20:27:42.703472    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:27:42.710185    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:27:42.710391    6560 command_runner.go:130] > ID=buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:27:42.710391    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:27:42.710562    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:27:42.710562    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:27:42.710640    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:27:42.712121    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:27:42.712121    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:27:42.725734    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:27:42.745571    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:27:42.798223    6560 start.go:296] duration metric: took 5.0959902s for postStartSetup
	I0429 20:27:42.801718    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:44.985225    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:47.630520    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:27:47.633045    6560 start.go:128] duration metric: took 2m8.8864784s to createHost
	I0429 20:27:47.633167    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:49.823309    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:49.823412    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:49.823495    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:52.524084    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:52.524183    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:52.530451    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:52.531204    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:52.531204    6560 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 20:27:52.658970    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422472.660345683
	
	I0429 20:27:52.659208    6560 fix.go:216] guest clock: 1714422472.660345683
	I0429 20:27:52.659208    6560 fix.go:229] Guest: 2024-04-29 20:27:52.660345683 +0000 UTC Remote: 2024-04-29 20:27:47.6330452 +0000 UTC m=+346.394263801 (delta=5.027300483s)
	I0429 20:27:52.659208    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:57.461861    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:57.461927    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:57.467747    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:57.468699    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:57.468699    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422472
	I0429 20:27:57.617018    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:27:52 UTC 2024
	
	I0429 20:27:57.617018    6560 fix.go:236] clock set: Mon Apr 29 20:27:52 UTC 2024
	 (err=<nil>)
	I0429 20:27:57.617018    6560 start.go:83] releasing machines lock for "multinode-515700-m02", held for 2m18.8709228s
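The clock-fix sequence above (read the guest's clock over SSH with `date +%s.%N`, compute the delta against the host, then set the guest clock with `sudo date -s @<epoch>`) can be sketched as a small shell snippet. This is an illustration of the logic only, not minikube's actual `fix.go` code; the variable names and the 2-second tolerance are assumptions, and the guest time is stubbed locally instead of being fetched over SSH.

```shell
#!/bin/sh
# Illustrative sketch of the guest-clock fix shown in the log:
# 1) read the guest's epoch time, 2) compare against the host's,
# 3) if the drift exceeds a tolerance, set the guest clock.
# (Names and the 2-second threshold are assumptions for illustration.)

host_epoch=$(date +%s)     # host time, whole seconds
guest_epoch=$host_epoch    # in the real flow this comes over SSH: date +%s.%N
drift=$((host_epoch - guest_epoch))

if [ "${drift#-}" -gt 2 ]; then
    # minikube runs the equivalent of this over SSH (see "sudo date -s @..." above)
    echo "would run: sudo date -s @${host_epoch}"
else
    echo "drift ${drift}s within tolerance"
fi
```

In the run above the measured delta was about 5 seconds, so the log takes the first branch and resets the guest clock.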
	I0429 20:27:57.618122    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:59.795247    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:02.475615    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:02.475867    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:02.479078    6560 out.go:177] * Found network options:
	I0429 20:28:02.481434    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.483990    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.486147    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.488513    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 20:28:02.490094    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.492090    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:28:02.492090    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:02.504078    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:28:02.504078    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:07.440744    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.440938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.441026    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.467629    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.629032    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:28:07.630105    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1379759s)
	I0429 20:28:07.630105    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 20:28:07.630229    6560 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1259881s)
	W0429 20:28:07.630229    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:28:07.649597    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:28:07.685721    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:28:07.685954    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
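The `find ... -exec mv` step above disables conflicting bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix. The same expression can be exercised safely against a temporary directory standing in for `/etc/cni/net.d`; the sample filenames below are illustrative.

```shell
# Sketch of the CNI-disable step from the log: rename bridge/podman
# configs to *.mk_disabled, leaving other configs untouched.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-flannel.conflist"

# Same predicate as the logged find command, applied to the temp dir.
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$d"
```

After the run, `87-podman-bridge.conflist` carries the `.mk_disabled` suffix (matching the log's output) while the flannel config is left alone, since it matches neither `*bridge*` nor `*podman*`.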
	I0429 20:28:07.685954    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:07.686227    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:07.722613    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:28:07.736060    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:28:07.771561    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:28:07.793500    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:28:07.809715    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:28:07.846242    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.882404    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:28:07.918280    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.956186    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:28:07.994072    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:28:08.029701    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:28:08.067417    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:28:08.104772    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:28:08.126209    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:28:08.140685    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:28:08.181598    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:08.410362    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
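The block above rewrites `/etc/containerd/config.toml` in place over SSH with a series of `sed` edits (pause image, `SystemdCgroup`, CNI `conf_dir`, and so on) before reloading systemd and restarting containerd. The same transformations, with the `sed` expressions copied from the log, can be sketched against a throwaway copy of the file; the sample TOML content is an assumption for illustration.

```shell
# Sketch of the config.toml rewrites the log performs, run against a
# temporary copy instead of /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
    SystemdCgroup = true
    conf_dir = "/opt/cni/net.d"
EOF

# Same expressions as the logged commands (indentation preserved via \1).
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"

cat "$cfg"
```

Note the `\1` backreference: each substitution keeps the line's original leading whitespace so the TOML nesting is preserved, which is why the expressions capture `^( *)` rather than anchoring on column zero.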
	I0429 20:28:08.449856    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:08.466974    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Unit]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:28:08.492900    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:28:08.492900    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:28:08.492900    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Service]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Type=notify
	I0429 20:28:08.492900    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:28:08.492900    6560 command_runner.go:130] > Environment=NO_PROXY=172.17.241.25
	I0429 20:28:08.492900    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:28:08.492900    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:28:08.492900    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:28:08.492900    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:28:08.492900    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:28:08.492900    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:28:08.492900    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:28:08.492900    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:28:08.492900    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:28:08.493891    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:28:08.493891    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:28:08.493891    6560 command_runner.go:130] > Delegate=yes
	I0429 20:28:08.493891    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:28:08.493891    6560 command_runner.go:130] > KillMode=process
	I0429 20:28:08.493891    6560 command_runner.go:130] > [Install]
	I0429 20:28:08.493891    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:28:08.505928    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.548562    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:28:08.606977    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.652185    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.695349    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:28:08.785230    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.816602    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:08.853434    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:28:08.870019    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:28:08.876256    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:28:08.890247    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:28:08.911471    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:28:08.962890    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:28:09.201152    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:28:09.397561    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:28:09.398166    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:28:09.451159    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:09.673084    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:29:10.809648    6560 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 20:29:10.809648    6560 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 20:29:10.809648    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1361028s)
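When `systemctl restart docker` fails with "control process exited with error code", as it does here after roughly 61 seconds, systemd's error message names the two diagnostic commands to run next, and the log indeed runs `journalctl` immediately after. A minimal triage sketch (illustrative, assuming a systemd host; the `daemon.json` content is an assumption based on the 130-byte cgroupfs config written earlier in the log):

```shell
# Triage for a failed docker.service restart. The first two commands are
# the ones systemd's error message points to; they need a live systemd
# host, so they are shown commented here.
#   systemctl status docker.service --no-pager    # exit code, recent log lines
#   journalctl -xeu docker.service --no-pager     # full unit journal (run next in the log)

# Offline stand-in: write a daemon.json the way the cgroup-driver step
# does and sanity-check its contents before blaming the daemon.
cfg=$(mktemp)
printf '%s\n' '{"exec-opts": ["native.cgroupdriver=cgroupfs"]}' > "$cfg"
grep -q 'cgroupdriver=cgroupfs' "$cfg" && echo "daemon.json written"
```

A malformed `daemon.json` is one common cause of exactly this failure mode, which is why checking the freshly written config is a cheap first step before reading the journal.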
	I0429 20:29:10.827248    6560 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 20:29:10.852173    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 20:29:10.852279    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 20:29:10.852319    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 20:29:10.852739    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854100    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854118    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854142    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854175    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 20:29:10.865335    6560 out.go:177] 
	W0429 20:29:10.865335    6560 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 20:29:10.865335    6560 out.go:239] * 
	W0429 20:29:10.869400    6560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:29:10.876700    6560 out.go:177] 
	
	
	==> Docker <==
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.311805235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.311843635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:49 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:49.314238729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:49 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:29:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1a58f6d29ec95da5888905a6941e048b2c50f12c8ae76975e21ae109c16a8bb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 20:29:50 multinode-515700 cri-dockerd[1230]: time="2024-04-29T20:29:50Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935705225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935856331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935874732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.936415956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:55 multinode-515700 dockerd[1325]: 2024/04/29 20:42:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:55 multinode-515700 dockerd[1325]: 2024/04/29 20:42:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	32c6f043cec2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   e1a58f6d29ec9       busybox-fc5497c4f-dv5v8
	15da1b832ef20       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   73ab97e30d3d0       coredns-7db6d8ff4d-drcsj
	b26e455e6f823       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   0274116a036cf       storage-provisioner
	11141cf0a01e5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              21 minutes ago      Running             kindnet-cni               0                   5c226cf922db1       kindnet-lt84t
	8d116812e2fa7       a0bf559e280cf                                                                                         22 minutes ago      Running             kube-proxy                0                   c4e88976a7bb5       kube-proxy-6gx5x
	9b9ad8fbed853       c42f13656d0b2                                                                                         22 minutes ago      Running             kube-apiserver            0                   e1040c321d522       kube-apiserver-multinode-515700
	7748681b165fb       259c8277fcbbc                                                                                         22 minutes ago      Running             kube-scheduler            0                   ab47450efbe05       kube-scheduler-multinode-515700
	01f30fac305bc       3861cfcd7c04c                                                                                         22 minutes ago      Running             etcd                      0                   b5202cca492c4       etcd-multinode-515700
	c5de44f1f1066       c7aad43836fa5                                                                                         22 minutes ago      Running             kube-controller-manager   0                   4ae9818227910       kube-controller-manager-multinode-515700
	
	
	==> coredns [15da1b832ef2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 658b75f59357881579d818bea4574a099ffd8bf4e34cb2d6414c381890635887b0895574e607ab48d69c0bc2657640404a00a48de79c5b96ce27f6a68e70a912
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36587 - 14172 "HINFO IN 4725538422205950284.7962128480288568612. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062354244s
	[INFO] 10.244.0.3:46156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244102s
	[INFO] 10.244.0.3:48057 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.210765088s
	[INFO] 10.244.0.3:47676 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15403778s
	[INFO] 10.244.0.3:57534 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.237328274s
	[INFO] 10.244.0.3:38726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000345103s
	[INFO] 10.244.0.3:54844 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.04703092s
	[INFO] 10.244.0.3:51897 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000879808s
	[INFO] 10.244.0.3:57925 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122101s
	[INFO] 10.244.0.3:39997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012692914s
	[INFO] 10.244.0.3:37301 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000333403s
	[INFO] 10.244.0.3:60294 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172702s
	[INFO] 10.244.0.3:33135 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000250902s
	[INFO] 10.244.0.3:46585 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141701s
	[INFO] 10.244.0.3:41280 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127902s
	[INFO] 10.244.0.3:46602 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000220001s
	[INFO] 10.244.0.3:47802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077001s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	[INFO] 10.244.0.3:45741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166201s
	[INFO] 10.244.0.3:48683 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	[INFO] 10.244.0.3:47252 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159702s
	
	
	==> describe nodes <==
	Name:               multinode-515700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:47:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:45:36 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:45:36 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:45:36 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:45:36 +0000   Mon, 29 Apr 2024 20:25:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.241.25
	  Hostname:    multinode-515700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc8de88647d944658545c7ae4a702aea
	  System UUID:                68adc21b-67d2-5446-9537-0dea9fd880a0
	  Boot ID:                    9507eca5-5f1f-4862-974e-a61fb27048d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dv5v8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-drcsj                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-multinode-515700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-lt84t                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-multinode-515700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-multinode-515700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-6gx5x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-multinode-515700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                node-controller  Node multinode-515700 event: Registered Node multinode-515700 in Controller
	  Normal  NodeReady                21m                kubelet          Node multinode-515700 status is now: NodeReady
	
	
	Name:               multinode-515700-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T20_46_05_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:47:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:46:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:46:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:46:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:46:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.240.210
	  Hostname:    multinode-515700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cba11e160ba341e08600b430623543e3
	  System UUID:                c93866d4-f3c2-8b4a-808f-8a49ef3473c2
	  Boot ID:                    eca6382a-2500-4a1e-9ddd-477f0ebe0910
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2t4c2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-svhl6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      82s
	  kube-system                 kube-proxy-ds5fx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 70s                kube-proxy       
	  Normal  NodeHasSufficientMemory  82s (x2 over 83s)  kubelet          Node multinode-515700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x2 over 83s)  kubelet          Node multinode-515700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x2 over 83s)  kubelet          Node multinode-515700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           78s                node-controller  Node multinode-515700-m03 event: Registered Node multinode-515700-m03 in Controller
	  Normal  NodeReady                59s                kubelet          Node multinode-515700-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 20:24] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.212417] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +31.830340] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.112166] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.613568] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.259380] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.863180] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.213718] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.233297] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.301716] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.953055] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.129851] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.793087] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[Apr29 20:25] systemd-fstab-generator[1710]: Ignoring "noauto" option for root device
	[  +0.110579] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.112113] systemd-fstab-generator[2108]: Ignoring "noauto" option for root device
	[  +0.165104] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.220827] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.255309] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.248279] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 20:26] hrtimer: interrupt took 3466547 ns
	[Apr29 20:29] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [01f30fac305b] <==
	{"level":"info","ts":"2024-04-29T20:43:34.867693Z","caller":"traceutil/trace.go:171","msg":"trace[240417427] linearizableReadLoop","detail":"{readStateIndex:1570; appliedIndex:1569; }","duration":"127.690146ms","start":"2024-04-29T20:43:34.739984Z","end":"2024-04-29T20:43:34.867674Z","steps":["trace[240417427] 'read index received'  (duration: 127.669446ms)","trace[240417427] 'applied index is now lower than readState.Index'  (duration: 20.2µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:43:34.867872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.868347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:43:34.868001Z","caller":"traceutil/trace.go:171","msg":"trace[1472637471] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1337; }","duration":"128.044349ms","start":"2024-04-29T20:43:34.739947Z","end":"2024-04-29T20:43:34.867992Z","steps":["trace[1472637471] 'agreement among raft nodes before linearized reading'  (duration: 127.795647ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:43:34.868426Z","caller":"traceutil/trace.go:171","msg":"trace[764321283] transaction","detail":"{read_only:false; response_revision:1337; number_of_response:1; }","duration":"224.704665ms","start":"2024-04-29T20:43:34.643711Z","end":"2024-04-29T20:43:34.868415Z","steps":["trace[764321283] 'process raft request'  (duration: 223.852758ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:45:06.303388Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1169}
	{"level":"info","ts":"2024-04-29T20:45:06.312061Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1169,"took":"8.045253ms","hash":475365449,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-29T20:45:06.312246Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":475365449,"revision":1169,"compact-revision":929}
	{"level":"info","ts":"2024-04-29T20:45:58.156567Z","caller":"traceutil/trace.go:171","msg":"trace[785089805] transaction","detail":"{read_only:false; response_revision:1453; number_of_response:1; }","duration":"170.534651ms","start":"2024-04-29T20:45:57.986006Z","end":"2024-04-29T20:45:58.156541Z","steps":["trace[785089805] 'process raft request'  (duration: 170.224549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:45:58.532911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.49431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-04-29T20:45:58.533001Z","caller":"traceutil/trace.go:171","msg":"trace[176342803] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1453; }","duration":"147.61771ms","start":"2024-04-29T20:45:58.385363Z","end":"2024-04-29T20:45:58.532981Z","steps":["trace[176342803] 'range keys from in-memory index tree'  (duration: 147.415808ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:45:58.717909Z","caller":"traceutil/trace.go:171","msg":"trace[259978277] transaction","detail":"{read_only:false; response_revision:1454; number_of_response:1; }","duration":"179.638307ms","start":"2024-04-29T20:45:58.538241Z","end":"2024-04-29T20:45:58.71788Z","steps":["trace[259978277] 'process raft request'  (duration: 179.431405ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:45:58.85575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.622912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:45:58.855965Z","caller":"traceutil/trace.go:171","msg":"trace[1396568622] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1454; }","duration":"115.880014ms","start":"2024-04-29T20:45:58.74007Z","end":"2024-04-29T20:45:58.85595Z","steps":["trace[1396568622] 'range keys from in-memory index tree'  (duration: 115.547212ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:09.855862Z","caller":"traceutil/trace.go:171","msg":"trace[811401261] transaction","detail":"{read_only:false; response_revision:1495; number_of_response:1; }","duration":"102.190223ms","start":"2024-04-29T20:46:09.753656Z","end":"2024-04-29T20:46:09.855846Z","steps":["trace[811401261] 'process raft request'  (duration: 102.095822ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:10.071953Z","caller":"traceutil/trace.go:171","msg":"trace[1996796465] transaction","detail":"{read_only:false; response_revision:1496; number_of_response:1; }","duration":"300.29343ms","start":"2024-04-29T20:46:09.77164Z","end":"2024-04-29T20:46:10.071933Z","steps":["trace[1996796465] 'process raft request'  (duration: 295.855603ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:46:10.072618Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:09.771623Z","time spent":"300.479031ms","remote":"127.0.0.1:50854","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2962,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-515700-m03\" mod_revision:1487 > success:<request_put:<key:\"/registry/minions/multinode-515700-m03\" value_size:2916 >> failure:<request_range:<key:\"/registry/minions/multinode-515700-m03\" > >"}
	{"level":"info","ts":"2024-04-29T20:46:15.569199Z","caller":"traceutil/trace.go:171","msg":"trace[1643861658] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"218.350023ms","start":"2024-04-29T20:46:15.350828Z","end":"2024-04-29T20:46:15.569178Z","steps":["trace[1643861658] 'process raft request'  (duration: 218.141522ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:15.960586Z","caller":"traceutil/trace.go:171","msg":"trace[1497086569] linearizableReadLoop","detail":"{readStateIndex:1774; appliedIndex:1773; }","duration":"367.734728ms","start":"2024-04-29T20:46:15.592832Z","end":"2024-04-29T20:46:15.960567Z","steps":["trace[1497086569] 'read index received'  (duration: 332.248313ms)","trace[1497086569] 'applied index is now lower than readState.Index'  (duration: 35.485815ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T20:46:15.960951Z","caller":"traceutil/trace.go:171","msg":"trace[818980090] transaction","detail":"{read_only:false; response_revision:1504; number_of_response:1; }","duration":"594.879604ms","start":"2024-04-29T20:46:15.36606Z","end":"2024-04-29T20:46:15.96094Z","steps":["trace[818980090] 'process raft request'  (duration: 559.784592ms)","trace[818980090] 'compare'  (duration: 34.64431ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:46:15.961608Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:15.366043Z","time spent":"594.957105ms","remote":"127.0.0.1:50958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":569,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" mod_revision:1486 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" value_size:508 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" > >"}
	{"level":"warn","ts":"2024-04-29T20:46:15.962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"369.162137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-515700-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-04-29T20:46:15.96206Z","caller":"traceutil/trace.go:171","msg":"trace[601879282] range","detail":"{range_begin:/registry/minions/multinode-515700-m03; range_end:; response_count:1; response_revision:1504; }","duration":"369.225137ms","start":"2024-04-29T20:46:15.592827Z","end":"2024-04-29T20:46:15.962052Z","steps":["trace[601879282] 'agreement among raft nodes before linearized reading'  (duration: 369.135436ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:46:15.962525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:15.592782Z","time spent":"369.464038ms","remote":"127.0.0.1:50854","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3172,"request content":"key:\"/registry/minions/multinode-515700-m03\" "}
	{"level":"warn","ts":"2024-04-29T20:46:15.962622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.652243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:46:15.962781Z","caller":"traceutil/trace.go:171","msg":"trace[632284179] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1504; }","duration":"221.955444ms","start":"2024-04-29T20:46:15.740814Z","end":"2024-04-29T20:46:15.962769Z","steps":["trace[632284179] 'agreement among raft nodes before linearized reading'  (duration: 221.659043ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:47:27 up 24 min,  0 users,  load average: 1.04, 0.85, 0.52
	Linux multinode-515700 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11141cf0a01e] <==
	I0429 20:46:26.631145       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:46:36.638503       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:46:36.638612       1 main.go:227] handling current node
	I0429 20:46:36.638628       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:46:36.638637       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:46:46.656299       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:46:46.656436       1 main.go:227] handling current node
	I0429 20:46:46.656570       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:46:46.656663       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:46:56.671146       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:46:56.671240       1 main.go:227] handling current node
	I0429 20:46:56.671826       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:46:56.671845       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:47:06.678942       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:47:06.679299       1 main.go:227] handling current node
	I0429 20:47:06.679386       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:47:06.679521       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:47:16.695624       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:47:16.695746       1 main.go:227] handling current node
	I0429 20:47:16.695770       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:47:16.695784       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:47:26.710875       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:47:26.711712       1 main.go:227] handling current node
	I0429 20:47:26.711737       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:47:26.711793       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9b9ad8fbed85] <==
	I0429 20:25:08.456691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 20:25:09.052862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 20:25:09.062497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 20:25:09.063038       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 20:25:10.434046       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 20:25:10.531926       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 20:25:10.667114       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 20:25:10.682682       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.241.25]
	I0429 20:25:10.685084       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 20:25:10.705095       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 20:25:11.202529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 20:25:11.660474       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 20:25:11.702512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 20:25:11.739640       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 20:25:25.195544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 20:25:25.294821       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 20:41:45.603992       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54600: use of closed network connection
	E0429 20:41:46.683622       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54606: use of closed network connection
	E0429 20:41:47.742503       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54616: use of closed network connection
	E0429 20:42:24.359204       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54636: use of closed network connection
	E0429 20:42:34.907983       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54638: use of closed network connection
	I0429 20:46:15.963628       1 trace.go:236] Trace[1378232527]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:bc84c8cc-c1e5-4f4d-8a1c-4ed7b226292a,client:172.17.240.210,api-group:coordination.k8s.io,api-version:v1,name:multinode-515700-m03,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-515700-m03,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (29-Apr-2024 20:46:15.363) (total time: 599ms):
	Trace[1378232527]: ["GuaranteedUpdate etcd3" audit-id:bc84c8cc-c1e5-4f4d-8a1c-4ed7b226292a,key:/leases/kube-node-lease/multinode-515700-m03,type:*coordination.Lease,resource:leases.coordination.k8s.io 599ms (20:46:15.364)
	Trace[1378232527]:  ---"Txn call completed" 598ms (20:46:15.963)]
	Trace[1378232527]: [599.725533ms] [599.725533ms] END
	
	
	==> kube-controller-manager [c5de44f1f106] <==
	I0429 20:25:25.137746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 20:25:25.742477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="536.801912ms"
	I0429 20:25:25.820241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.613668ms"
	I0429 20:25:25.820606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.801µs"
	I0429 20:25:26.647122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.452819ms"
	I0429 20:25:26.673190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.454556ms"
	I0429 20:25:26.673366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.301µs"
	I0429 20:25:35.442523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48µs"
	I0429 20:25:35.504302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.901µs"
	I0429 20:25:37.519404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.21268ms"
	I0429 20:25:37.519516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.698µs"
	I0429 20:25:39.495810       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 20:29:47.937478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.419556ms"
	I0429 20:29:47.961915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.36964ms"
	I0429 20:29:47.962862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.499µs"
	I0429 20:29:52.098445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.730146ms"
	I0429 20:29:52.098921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.902µs"
	I0429 20:46:05.025369       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-515700-m03\" does not exist"
	I0429 20:46:05.038750       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-515700-m03" podCIDRs=["10.244.1.0/24"]
	I0429 20:46:09.749698       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-515700-m03"
	I0429 20:46:28.280618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-515700-m03"
	I0429 20:46:28.324633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.8µs"
	I0429 20:46:28.354027       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.9µs"
	I0429 20:46:31.239793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.942065ms"
	I0429 20:46:31.240386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="306.702µs"
	
	
	==> kube-proxy [8d116812e2fa] <==
	I0429 20:25:27.278575       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:25:27.322396       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.241.25"]
	I0429 20:25:27.381777       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:25:27.381896       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:25:27.381924       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:25:27.389649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:25:27.392153       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:25:27.392448       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:25:27.396161       1 config.go:192] "Starting service config controller"
	I0429 20:25:27.396372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:25:27.396564       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:25:27.396976       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:25:27.399035       1 config.go:319] "Starting node config controller"
	I0429 20:25:27.399236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:25:27.497521       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:25:27.497518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:25:27.500527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7748681b165f] <==
	W0429 20:25:09.310708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.311983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.372121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.372287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.389043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:25:09.389975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 20:25:09.402308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.402357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.414781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:25:09.414997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:25:09.463545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:25:09.463684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:25:09.473360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 20:25:09.473524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 20:25:09.538214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 20:25:09.538587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 20:25:09.595918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.596510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.751697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 20:25:09.752615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 20:25:09.794103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:25:09.794595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 20:25:09.800334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 20:25:09.800494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 20:25:11.092300       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:43:11 multinode-515700 kubelet[2116]: E0429 20:43:11.924530    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:43:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:43:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:43:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:43:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:44:11 multinode-515700 kubelet[2116]: E0429 20:44:11.928923    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:44:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:44:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:44:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:44:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:45:11 multinode-515700 kubelet[2116]: E0429 20:45:11.923458    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:45:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:45:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:45:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:45:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:46:11 multinode-515700 kubelet[2116]: E0429 20:46:11.926896    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:47:11 multinode-515700 kubelet[2116]: E0429 20:47:11.924357    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:47:19.011224    9380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700: (12.3537506s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-515700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (271.00s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (72.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status --output json --alsologtostderr
E0429 20:48:10.222749   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status --output json --alsologtostderr: exit status 2 (36.7258142s)

                                                
                                                
-- stdout --
	[{"Name":"multinode-515700","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-515700-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-515700-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:47:51.835545    1416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 20:47:51.926530    1416 out.go:291] Setting OutFile to fd 1888 ...
	I0429 20:47:51.926530    1416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:47:51.926530    1416 out.go:304] Setting ErrFile to fd 1468...
	I0429 20:47:51.926530    1416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:47:51.941532    1416 out.go:298] Setting JSON to true
	I0429 20:47:51.941532    1416 mustload.go:65] Loading cluster: multinode-515700
	I0429 20:47:51.941532    1416 notify.go:220] Checking for updates...
	I0429 20:47:51.942525    1416 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:47:51.942525    1416 status.go:255] checking status of multinode-515700 ...
	I0429 20:47:51.943530    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:47:54.152090    1416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:47:54.152090    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:47:54.152090    1416 status.go:330] multinode-515700 host status = "Running" (err=<nil>)
	I0429 20:47:54.152090    1416 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:47:54.152929    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:47:56.386011    1416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:47:56.386102    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:47:56.386102    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:47:59.000729    1416 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:47:59.000729    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:47:59.000729    1416 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:47:59.023983    1416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 20:47:59.024089    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:48:01.249510    1416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:48:01.250160    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:01.250243    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:48:03.906509    1416 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:48:03.907074    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:03.907142    1416 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:48:04.018029    1416 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9938671s)
	I0429 20:48:04.032369    1416 ssh_runner.go:195] Run: systemctl --version
	I0429 20:48:04.058349    1416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:48:04.088499    1416 kubeconfig.go:125] found "multinode-515700" server: "https://172.17.241.25:8443"
	I0429 20:48:04.088499    1416 api_server.go:166] Checking apiserver status ...
	I0429 20:48:04.100478    1416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:48:04.140661    1416 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup
	W0429 20:48:04.167896    1416 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:48:04.184531    1416 ssh_runner.go:195] Run: ls
	I0429 20:48:04.195150    1416 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:48:04.202534    1416 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:48:04.202534    1416 status.go:422] multinode-515700 apiserver status = Running (err=<nil>)
	I0429 20:48:04.202682    1416 status.go:257] multinode-515700 status: &{Name:multinode-515700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 20:48:04.202682    1416 status.go:255] checking status of multinode-515700-m02 ...
	I0429 20:48:04.202918    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:48:06.420153    1416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:48:06.420153    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:06.420153    1416 status.go:330] multinode-515700-m02 host status = "Running" (err=<nil>)
	I0429 20:48:06.420153    1416 host.go:66] Checking if "multinode-515700-m02" exists ...
	I0429 20:48:06.421899    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:48:08.592165    1416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:48:08.592165    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:08.592165    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:48:11.223241    1416 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:48:11.223846    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:11.223846    1416 host.go:66] Checking if "multinode-515700-m02" exists ...
	I0429 20:48:11.239046    1416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 20:48:11.239046    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:48:13.403678    1416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:48:13.403678    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:13.403678    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:48:16.094525    1416 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:48:16.095284    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:16.096040    1416 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:48:16.201430    1416 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9623479s)
	I0429 20:48:16.215462    1416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:48:16.243348    1416 status.go:257] multinode-515700-m02 status: &{Name:multinode-515700-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 20:48:16.243348    1416 status.go:255] checking status of multinode-515700-m03 ...
	I0429 20:48:16.244204    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:48:18.459942    1416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:48:18.459942    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:18.460479    1416 status.go:330] multinode-515700-m03 host status = "Running" (err=<nil>)
	I0429 20:48:18.460479    1416 host.go:66] Checking if "multinode-515700-m03" exists ...
	I0429 20:48:18.460794    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:48:20.728853    1416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:48:20.728914    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:20.728914    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:48:23.388115    1416 main.go:141] libmachine: [stdout =====>] : 172.17.240.210
	
	I0429 20:48:23.388115    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:23.388115    1416 host.go:66] Checking if "multinode-515700-m03" exists ...
	I0429 20:48:23.406150    1416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 20:48:23.406301    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:48:25.574666    1416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:48:25.575080    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:25.575141    1416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:48:28.245808    1416 main.go:141] libmachine: [stdout =====>] : 172.17.240.210
	
	I0429 20:48:28.245808    1416 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:48:28.246271    1416 sshutil.go:53] new ssh client: &{IP:172.17.240.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m03\id_rsa Username:docker}
	I0429 20:48:28.349683    1416 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9434958s)
	I0429 20:48:28.363489    1416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:48:28.393110    1416 status.go:257] multinode-515700-m03 status: &{Name:multinode-515700-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-515700 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700: (12.475847s)
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25: (8.7993868s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-515700 -- apply -f                   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:29 UTC | 29 Apr 24 20:29 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- rollout                    | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:29 UTC |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o                | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | busybox-fc5497c4f-dv5v8                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec                       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-dv5v8 -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.240.1                         |                  |                   |         |                     |                     |
	| node    | add -p multinode-515700 -v 3                      | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:43 UTC | 29 Apr 24 20:46 UTC |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:22:01
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:22:01.431751    6560 out.go:291] Setting OutFile to fd 1000 ...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.432590    6560 out.go:304] Setting ErrFile to fd 1156...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.463325    6560 out.go:298] Setting JSON to false
	I0429 20:22:01.467738    6560 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24060,"bootTime":1714398060,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 20:22:01.467738    6560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 20:22:01.473386    6560 out.go:177] * [multinode-515700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 20:22:01.477900    6560 notify.go:220] Checking for updates...
	I0429 20:22:01.480328    6560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:22:01.485602    6560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:22:01.488123    6560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 20:22:01.490657    6560 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:22:01.493249    6560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:22:01.496241    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:22:01.497610    6560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:22:06.930154    6560 out.go:177] * Using the hyperv driver based on user configuration
	I0429 20:22:06.933587    6560 start.go:297] selected driver: hyperv
	I0429 20:22:06.933587    6560 start.go:901] validating driver "hyperv" against <nil>
	I0429 20:22:06.933587    6560 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:22:06.986262    6560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 20:22:06.987723    6560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:22:06.988334    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:22:06.988334    6560 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 20:22:06.988334    6560 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 20:22:06.988334    6560 start.go:340] cluster config:
	{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:22:06.988334    6560 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:22:06.992867    6560 out.go:177] * Starting "multinode-515700" primary control-plane node in "multinode-515700" cluster
	I0429 20:22:06.995976    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:22:06.996499    6560 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 20:22:06.996703    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:22:06.996741    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:22:06.996741    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:22:06.996741    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:22:06.996741    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json: {Name:mkdf346f9e30a055d7c79ffb416c8ce539e0c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:360] acquireMachinesLock for multinode-515700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700"
	I0429 20:22:06.999081    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:22:06.999081    6560 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 20:22:07.006481    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:22:07.006790    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:22:07.006790    6560 client.go:168] LocalClient.Create starting
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:22:09.217702    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:22:09.217822    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:09.217951    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:22:11.056235    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:12.618512    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:16.461966    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:22:17.019827    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: Creating VM...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:20.140355    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:22:20.140483    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:22.004896    6560 main.go:141] libmachine: Creating VHD
	I0429 20:22:22.004896    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:22:25.795387    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DA11902-3EE7-4F99-A00A-752C0686FD99
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:22:25.796445    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:25.796496    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:22:25.796702    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:22:25.814462    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:22:29.034595    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:29.035273    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:29.035337    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -SizeBytes 20000MB
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:31.671427    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-515700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:35.461856    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700 -DynamicMemoryEnabled $false
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:37.723890    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700 -Count 2
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\boot2docker.iso'
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:42.558432    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd'
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:45.265400    6560 main.go:141] libmachine: Starting VM...
	I0429 20:22:45.265400    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:50.732199    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:50.733048    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:50.733149    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:54.308058    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:56.517062    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:59.110985    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:59.111613    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:00.127675    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:02.349860    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:05.987459    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:08.224322    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:10.790333    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:10.791338    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:11.803237    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:14.061252    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stderr =====>] : 
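The repeated `Get-VM ... .ipaddresses[0]` calls above are a poll-until-ready loop: the Hyper-V driver re-queries the VM's first network adapter until an address appears, pausing between attempts. A minimal sketch of that pattern (the fake query that succeeds on the third poll is an illustration, not the driver's real code):

```shell
#!/bin/sh
# Sketch of the retry loop visible in the log above: keep polling for
# the VM's IP until one is reported. The "success on try 3" condition
# simulates the adapter coming up; the real driver shells out to
# PowerShell and sleeps ~1s between polls.
try=0
IP=""
while [ -z "$IP" ]; do
  try=$((try + 1))
  # stand-in for (( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0]
  if [ "$try" -ge 3 ]; then
    IP="172.17.241.25"
  fi
done
echo "polled $try times, got $IP"
```
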
	I0429 20:23:18.855659    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:23:18.855911    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:21.063683    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:23.697335    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:23.697580    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:23.703285    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:23.713965    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:23.713965    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:23:23.854760    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:23:23.854760    6560 buildroot.go:166] provisioning hostname "multinode-515700"
	I0429 20:23:23.854760    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:26.029157    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:26.029995    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:26.030093    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:28.624899    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:28.625217    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:28.625495    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700 && echo "multinode-515700" | sudo tee /etc/hostname
	I0429 20:23:28.799265    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700
	
	I0429 20:23:28.799376    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:30.924333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:33.588985    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:33.589381    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:33.589381    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700' | sudo tee -a /etc/hosts; 
				fi
			fi
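The `/etc/hosts` command above is idempotent: it only acts when the hostname is missing, replacing an existing `127.0.1.1` entry if there is one and appending otherwise. A sketch of the same logic run against a scratch copy, so no root access is needed (the file contents and `example-node` name are placeholders):

```shell
#!/bin/sh
# Idempotent hosts-file update, as in the log, but on a temp file.
HOSTS=$(mktemp)
NAME="example-node"                       # placeholder hostname
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

if ! grep -q "\s$NAME\$" "$HOSTS"; then   # only act if the name is absent
  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
    # replace the existing 127.0.1.1 entry in place (GNU sed -i)
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # no 127.0.1.1 line yet: append one
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Running it twice leaves the file unchanged the second time, which is why the provisioner can safely re-run this step on every start.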
	I0429 20:23:33.743242    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:23:33.743242    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:23:33.743242    6560 buildroot.go:174] setting up certificates
	I0429 20:23:33.743242    6560 provision.go:84] configureAuth start
	I0429 20:23:33.743939    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:35.885562    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:38.477298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:40.581307    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:43.165623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:43.165853    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:43.165933    6560 provision.go:143] copyHostCerts
	I0429 20:23:43.166093    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:23:43.166093    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:23:43.166093    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:23:43.166722    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:23:43.168141    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:23:43.168305    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:23:43.168305    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:23:43.168887    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:23:43.169614    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:23:43.170245    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:23:43.170340    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:23:43.170731    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
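The `copyHostCerts` lines above follow a remove-then-copy pattern: if a destination file already exists it is deleted first, then the source is copied and the byte count logged. A sketch on temp files (the `dummy-cert-data` payload is obviously not a real certificate):

```shell
#!/bin/sh
# "found ..., removing ..." then "cp: ... --> ... (N bytes)" in miniature.
SRC=$(mktemp)
DST=$(mktemp)                       # pre-existing destination, as in the log
printf 'dummy-cert-data' > "$SRC"

[ -e "$DST" ] && rm -f "$DST"       # exec_runner "rm:" step
cp "$SRC" "$DST"                    # exec_runner "cp:" step
echo "cp: $SRC --> $DST ($(wc -c < "$DST") bytes)"
```
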
	I0429 20:23:43.171712    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700 san=[127.0.0.1 172.17.241.25 localhost minikube multinode-515700]
	I0429 20:23:43.368646    6560 provision.go:177] copyRemoteCerts
	I0429 20:23:43.382882    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:23:43.382882    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:45.539057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:48.109324    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:23:48.217340    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8343588s)
	I0429 20:23:48.217478    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:23:48.218375    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:23:48.267636    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:23:48.267636    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 20:23:48.316493    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:23:48.316784    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:23:48.372851    6560 provision.go:87] duration metric: took 14.6294509s to configureAuth
	I0429 20:23:48.372952    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:23:48.373086    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:23:48.373086    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:50.522765    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:50.522998    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:50.523146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:53.169650    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:53.170462    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:53.170462    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:23:53.302673    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:23:53.302726    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:23:53.302726    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:23:53.302726    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:55.434984    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:58.060160    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:58.061082    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:58.067077    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:58.068199    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:58.068292    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:23:58.226608    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:23:58.227212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:00.358933    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:02.944293    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:02.944373    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:02.950227    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:02.950958    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:02.950958    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
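The one-liner above renders the unit to `docker.service.new`, then swaps it into place (and reloads/restarts Docker) only when it differs from the current unit. The same write-new-compare-swap pattern, demoed on temp files with no sudo or systemd so it can run anywhere (the `echo` stands in for the daemon-reload/restart step):

```shell
#!/bin/sh
# Swap in a freshly rendered unit file only if it changed (or the
# target does not exist yet, as on first provision in the log).
DIR=$(mktemp -d)
UNIT="$DIR/docker.service"
printf '[Unit]\nDescription=demo\n' > "$UNIT.new"

if ! diff -u "$UNIT" "$UNIT.new" 2>/dev/null; then
  mv "$UNIT.new" "$UNIT"
  echo "unit updated"     # real code: daemon-reload, enable, restart
else
  rm -f "$UNIT.new"
  echo "unit unchanged"
fi
```

On a fresh machine `diff` fails with "can't stat" (exactly the message in the log output below the command), so the new unit is installed; on re-runs with no changes, the restart is skipped entirely.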
	I0429 20:24:05.224184    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:24:05.224184    6560 machine.go:97] duration metric: took 46.3681587s to provisionDockerMachine
	I0429 20:24:05.224184    6560 client.go:171] duration metric: took 1m58.2164577s to LocalClient.Create
	I0429 20:24:05.224184    6560 start.go:167] duration metric: took 1m58.2164577s to libmachine.API.Create "multinode-515700"
	I0429 20:24:05.224184    6560 start.go:293] postStartSetup for "multinode-515700" (driver="hyperv")
	I0429 20:24:05.224184    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:24:05.241199    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:24:05.241199    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:07.393879    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:09.983789    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:09.984033    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:09.984469    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:10.092254    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8510176s)
	I0429 20:24:10.107982    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:24:10.116700    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:24:10.116700    6560 command_runner.go:130] > ID=buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:24:10.116700    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:24:10.116700    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:24:10.116700    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:24:10.117268    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:24:10.118515    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:24:10.118515    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:24:10.132514    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:24:10.152888    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:24:10.201665    6560 start.go:296] duration metric: took 4.9774423s for postStartSetup
	I0429 20:24:10.204966    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:12.345708    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:12.345785    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:12.345855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:14.957675    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:24:14.960758    6560 start.go:128] duration metric: took 2m7.9606641s to createHost
	I0429 20:24:14.962026    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:17.100197    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:17.100281    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:17.100354    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:19.725196    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:19.725860    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:19.725860    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:24:19.867560    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422259.868914581
	
	I0429 20:24:19.867560    6560 fix.go:216] guest clock: 1714422259.868914581
	I0429 20:24:19.867694    6560 fix.go:229] Guest: 2024-04-29 20:24:19.868914581 +0000 UTC Remote: 2024-04-29 20:24:14.9613787 +0000 UTC m=+133.724240401 (delta=4.907535881s)
	I0429 20:24:19.867694    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:22.005967    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:24.588016    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:24.588016    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:24.588016    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422259
	I0429 20:24:24.741766    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:24:19 UTC 2024
	
	I0429 20:24:24.741837    6560 fix.go:236] clock set: Mon Apr 29 20:24:19 UTC 2024
	 (err=<nil>)
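The clock fix above reads the guest's epoch over SSH, computes the host/guest delta (4.9s here), and applies `sudo date -s @<epoch>` on the guest. A sketch of just the drift computation; the guest epoch is the value from the log, while the host epoch is a hypothetical reading about 5s behind:

```shell
#!/bin/sh
# Compute absolute host/guest clock drift in whole seconds, as the
# provisioner does before deciding to reset the guest clock.
GUEST_EPOCH=1714422259        # from the log above
HOST_EPOCH=1714422254         # hypothetical host reading

DELTA=$((GUEST_EPOCH - HOST_EPOCH))
[ "$DELTA" -lt 0 ] && DELTA=$((-DELTA))   # drift can go either way
echo "drift: ${DELTA}s"
```
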
	I0429 20:24:24.741837    6560 start.go:83] releasing machines lock for "multinode-515700", held for 2m17.7427319s
	I0429 20:24:24.742129    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:26.884301    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:29.475377    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:29.476046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:29.480912    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:24:29.481639    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:29.493304    6560 ssh_runner.go:195] Run: cat /version.json
	I0429 20:24:29.493304    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:31.702922    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:34.435635    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.436190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.436258    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.480228    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.481073    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.481135    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.531424    6560 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 20:24:34.531720    6560 ssh_runner.go:235] Completed: cat /version.json: (5.0383759s)
	I0429 20:24:34.545943    6560 ssh_runner.go:195] Run: systemctl --version
	I0429 20:24:34.614256    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:24:34.615354    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1343125s)
	I0429 20:24:34.615354    6560 command_runner.go:130] > systemd 252 (252)
	I0429 20:24:34.615354    6560 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 20:24:34.630005    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:24:34.639051    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 20:24:34.639955    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:24:34.653590    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:24:34.683800    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:24:34.683903    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
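The disable step above renames any bridge/podman CNI configs so the runtime ignores them. A minimal sketch of the same `find` pattern, run against a scratch directory instead of `/etc/cni/net.d` (paths here are illustrative, not minikube's):

```shell
# Sketch of the CNI-disable step above, against a scratch directory
# instead of /etc/cni/net.d (illustrative paths, no sudo needed).
set -eu
dir=$(mktemp -d)
touch "$dir/87-podman-bridge.conflist" "$dir/10-kindnet.conflist"

# Rename every bridge/podman config to *.mk_disabled, printing each path.
find "$dir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
echo
ls "$dir"
```

The `-not -name '*.mk_disabled'` guard makes the step idempotent: re-running it skips configs already disabled.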
	I0429 20:24:34.683903    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:34.684139    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:34.720958    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
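The `printf ... | sudo tee` pipeline above is how minikube writes `/etc/crictl.yaml` as a non-root SSH user. A minimal sketch of the same pattern, aimed at a temp file so no sudo is needed:

```shell
# Sketch of the crictl.yaml write above, targeting a temp file
# instead of /etc/crictl.yaml (no sudo required for the demo).
set -eu
out=$(mktemp)
printf '%s' 'runtime-endpoint: unix:///run/containerd/containerd.sock
' | tee "$out"
grep -q '^runtime-endpoint: unix:///run/containerd/containerd.sock$' "$out"
```

`tee` both writes the file and echoes the content back, which is why the runtime-endpoint line shows up as command output in the log.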
	I0429 20:24:34.735137    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:24:34.769077    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:24:34.791121    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:24:34.804751    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:24:34.838781    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.871052    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:24:34.905043    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.940043    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:24:34.975295    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:24:35.009502    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:24:35.044104    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
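The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place while preserving indentation via the `\1` backreference. A sketch of two of those edits against a sample fragment (the fragment content is illustrative):

```shell
# Sketch of the containerd config edits above, applied to a sample
# fragment rather than /etc/containerd/config.toml.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
    SystemdCgroup = true
EOF
# Pin the pause image and force the cgroupfs driver, keeping leading spaces.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

Capturing the leading whitespace in `( *)` and re-emitting it as `\1` is what keeps the TOML nesting intact after the substitution.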
	I0429 20:24:35.078095    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:24:35.099570    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:24:35.114246    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:24:35.146794    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:35.365920    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:24:35.402710    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:35.417050    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Unit]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:24:35.443946    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:24:35.443946    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:24:35.443946    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Service]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Type=notify
	I0429 20:24:35.443946    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:24:35.443946    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:24:35.443946    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:24:35.443946    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:24:35.443946    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:24:35.443946    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:24:35.443946    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:24:35.443946    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:24:35.443946    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:24:35.443946    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:24:35.443946    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:24:35.443946    6560 command_runner.go:130] > Delegate=yes
	I0429 20:24:35.443946    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:24:35.443946    6560 command_runner.go:130] > KillMode=process
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Install]
	I0429 20:24:35.444947    6560 command_runner.go:130] > WantedBy=multi-user.target
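The empty `ExecStart=` followed by a full `ExecStart=...` in the unit dump above is standard systemd drop-in practice: the empty directive clears the command list inherited from the base unit, so only the override's command runs (otherwise systemd rejects the unit with the "more than one ExecStart=" error quoted in the comments). A minimal illustrative drop-in with a hypothetical path:

```ini
# /etc/systemd/system/docker.service.d/10-override.conf (hypothetical path)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```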
	I0429 20:24:35.457957    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.500818    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:24:35.548559    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.585869    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.622879    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:24:35.694256    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.721660    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:35.757211    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:24:35.773795    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:24:35.779277    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:24:35.793892    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:24:35.813834    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:24:35.865638    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:24:36.085117    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:24:36.291781    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:24:36.291781    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:24:36.337637    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:36.567033    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:39.106704    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396504s)
	I0429 20:24:39.121937    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 20:24:39.164421    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.201973    6560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 20:24:39.432817    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 20:24:39.648494    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:39.872471    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 20:24:39.918782    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.959078    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:40.189711    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 20:24:40.314827    6560 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 20:24:40.327765    6560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 20:24:40.337989    6560 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 20:24:40.338077    6560 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 20:24:40.338077    6560 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Modify: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Change: 2024-04-29 20:24:40.227771386 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] >  Birth: -
	I0429 20:24:40.338228    6560 start.go:562] Will wait 60s for crictl version
	I0429 20:24:40.353543    6560 ssh_runner.go:195] Run: which crictl
	I0429 20:24:40.359551    6560 command_runner.go:130] > /usr/bin/crictl
	I0429 20:24:40.372542    6560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:24:40.422534    6560 command_runner.go:130] > Version:  0.1.0
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeName:  docker
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 20:24:40.422534    6560 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 20:24:40.433531    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.468470    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.477791    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.510922    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.518057    6560 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 20:24:40.518283    6560 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: 172.17.240.1/20
	I0429 20:24:40.538782    6560 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 20:24:40.546082    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
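The one-liner above is minikube's idempotent hosts-file update: strip any stale `host.minikube.internal` entry, append the current one, and copy the result back over `/etc/hosts` via a temp file. A sketch of the same pattern against a temp copy (bash, for the `$'\t'` quoting):

```shell
# Sketch of the hosts-update pattern above, against a temp copy
# rather than /etc/hosts.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"
# Drop any stale entry, then append the current mapping via a temp file.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.17.240.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to `$hosts.new` first and then replacing the file mirrors the `> /tmp/h.$$; sudo cp` step in the log, avoiding a truncated hosts file if the pipeline is interrupted.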
	I0429 20:24:40.569927    6560 kubeadm.go:877] updating cluster {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:24:40.570125    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:24:40.581034    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:40.605162    6560 docker.go:685] Got preloaded images: 
	I0429 20:24:40.605162    6560 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 20:24:40.617894    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:40.637456    6560 command_runner.go:139] > {"Repositories":{}}
	I0429 20:24:40.652557    6560 ssh_runner.go:195] Run: which lz4
	I0429 20:24:40.659728    6560 command_runner.go:130] > /usr/bin/lz4
	I0429 20:24:40.659728    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 20:24:40.676390    6560 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:24:40.682600    6560 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 20:24:43.151463    6560 docker.go:649] duration metric: took 2.4917153s to copy over tarball
	I0429 20:24:43.166991    6560 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:24:51.777678    6560 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6106197s)
	I0429 20:24:51.777678    6560 ssh_runner.go:146] rm: /preloaded.tar.lz4
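The `stat -c "%s %y"` probe above doubles as an existence check: it prints size and mtime when the preload tarball is already on the VM, and exits non-zero otherwise, which is what triggers the scp. A sketch of that check (GNU `stat`, temp paths only):

```shell
# Sketch of the existence check above: stat prints "size mtime" when the
# file exists and fails otherwise, gating the copy step.
set -u
f=$(mktemp)
stat -c "%s %y" "$f"
rm "$f"
if ! stat -c "%s %y" "$f" 2>/dev/null; then
  echo "missing: would copy the preload tarball here"
fi
```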
	I0429 20:24:51.848689    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:51.869772    6560 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 20:24:51.869772    6560 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 20:24:51.923721    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:52.150884    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:55.504316    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3534062s)
	I0429 20:24:55.515091    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:24:55.540357    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 20:24:55.540357    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:24:55.540557    6560 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 20:24:55.540557    6560 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:24:55.540557    6560 kubeadm.go:928] updating node { 172.17.241.25 8443 v1.30.0 docker true true} ...
	I0429 20:24:55.540557    6560 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-515700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.241.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:24:55.550945    6560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 20:24:55.586940    6560 command_runner.go:130] > cgroupfs
	I0429 20:24:55.587354    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:24:55.587354    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:24:55.587354    6560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:24:55.587354    6560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.241.25 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-515700 NodeName:multinode-515700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.241.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.241.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:24:55.587882    6560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.241.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-515700"
	  kubeletExtraArgs:
	    node-ip: 172.17.241.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.241.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
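The generated kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick structural check against a trimmed-down copy:

```shell
# Sketch: the kubeadm.yaml above is four "---"-separated YAML documents;
# counting "kind:" lines in a trimmed sample confirms the shape.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$cfg"
```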
	
	I0429 20:24:55.601173    6560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubeadm
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubectl
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubelet
	I0429 20:24:55.622022    6560 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:24:55.633924    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:24:55.654273    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 20:24:55.692289    6560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:24:55.726319    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:24:55.774801    6560 ssh_runner.go:195] Run: grep 172.17.241.25	control-plane.minikube.internal$ /etc/hosts
	I0429 20:24:55.781653    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.241.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:24:55.820570    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:56.051044    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:24:56.087660    6560 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700 for IP: 172.17.241.25
	I0429 20:24:56.087753    6560 certs.go:194] generating shared ca certs ...
	I0429 20:24:56.087824    6560 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 20:24:56.089063    6560 certs.go:256] generating profile certs ...
	I0429 20:24:56.089855    6560 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key
	I0429 20:24:56.089855    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt with IP's: []
	I0429 20:24:56.283640    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt ...
	I0429 20:24:56.284633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt: {Name:mk1286f657dae134d1e4806ec4fc1d780c02f0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.285633    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key ...
	I0429 20:24:56.285633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key: {Name:mka98d4501f3f942abed1092b1c97c4a2bbd30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.286633    6560 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d
	I0429 20:24:56.287300    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.241.25]
	I0429 20:24:56.456862    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d ...
	I0429 20:24:56.456862    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d: {Name:mk09d828589f59d94791e90fc999c9ce1101118e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d ...
	I0429 20:24:56.458476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d: {Name:mk92ebf0409a99e4a3e3430ff86080f164f4bc0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458796    6560 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt
	I0429 20:24:56.473961    6560 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key
	I0429 20:24:56.474965    6560 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key
	I0429 20:24:56.474965    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt with IP's: []
	I0429 20:24:56.680472    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt ...
	I0429 20:24:56.680472    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt: {Name:mkc600562c7738e3eec9de4025428e3048df463a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.682476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key ...
	I0429 20:24:56.682476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key: {Name:mkc9ba6e1afbc9ca05ce8802b568a72bfd19a90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 20:24:56.685482    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 20:24:56.693323    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 20:24:56.701358    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 20:24:56.702409    6560 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 20:24:56.702718    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 20:24:56.702843    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 20:24:56.705315    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:24:56.758912    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 20:24:56.809584    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:24:56.860874    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:24:56.918708    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:24:56.969377    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:24:57.018903    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:24:57.070438    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:24:57.119823    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:24:57.168671    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 20:24:57.216697    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 20:24:57.263605    6560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:24:57.314590    6560 ssh_runner.go:195] Run: openssl version
	I0429 20:24:57.325614    6560 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 20:24:57.340061    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:24:57.374639    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.394971    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.404667    6560 command_runner.go:130] > b5213941
	I0429 20:24:57.419162    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:24:57.454540    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 20:24:57.494441    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.501867    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.502224    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.517134    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.527174    6560 command_runner.go:130] > 51391683
	I0429 20:24:57.544472    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 20:24:57.579789    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 20:24:57.613535    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622605    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622696    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.637764    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.649176    6560 command_runner.go:130] > 3ec20f2e
	I0429 20:24:57.665410    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:24:57.708796    6560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:24:57.716466    6560 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717133    6560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717510    6560 kubeadm.go:391] StartCluster: {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:24:57.729105    6560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 20:24:57.771112    6560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 20:24:57.792952    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 20:24:57.807601    6560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:24:57.837965    6560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 20:24:57.856820    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:156] found existing configuration files:
	
	I0429 20:24:57.872870    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:24:57.892109    6560 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.892549    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.905782    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:24:57.939062    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:24:57.957607    6560 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.957753    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.972479    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:24:58.006849    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.025918    6560 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.025918    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.039054    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.072026    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:24:58.092314    6560 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.092673    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.105776    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:24:58.124274    6560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:24:58.562957    6560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:24:58.562957    6560 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:25:12.186137    6560 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186137    6560 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186277    6560 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 20:25:12.186320    6560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:25:12.186516    6560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.187085    6560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.187131    6560 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.190071    6560 out.go:204]   - Generating certificates and keys ...
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190667    6560 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.191251    6560 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191251    6560 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 20:25:12.193040    6560 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193086    6560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193701    6560 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193701    6560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.198949    6560 out.go:204]   - Booting up control plane ...
	I0429 20:25:12.193843    6560 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199855    6560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:25:12.200494    6560 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200494    6560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201207    6560 command_runner.go:130] > [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.202201    6560 command_runner.go:130] > [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 out.go:204]   - Configuring RBAC rules ...
	I0429 20:25:12.202201    6560 command_runner.go:130] > [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.204361    6560 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.206433    6560 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206433    6560 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206983    6560 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 kubeadm.go:309] 
	I0429 20:25:12.207142    6560 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 kubeadm.go:309] 
	I0429 20:25:12.207365    6560 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207404    6560 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207464    6560 kubeadm.go:309] 
	I0429 20:25:12.207514    6560 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 20:25:12.207589    6560 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:25:12.207764    6560 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.207807    6560 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.208030    6560 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 kubeadm.go:309] 
	I0429 20:25:12.208230    6560 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208230    6560 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208281    6560 kubeadm.go:309] 
	I0429 20:25:12.208375    6560 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208375    6560 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208442    6560 kubeadm.go:309] 
	I0429 20:25:12.208643    6560 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 20:25:12.208733    6560 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:25:12.208874    6560 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.208936    6560 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.209014    6560 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209014    6560 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209735    6560 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209735    6560 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--control-plane 
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--control-plane 
	I0429 20:25:12.210277    6560 kubeadm.go:309] 
	I0429 20:25:12.210538    6560 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] 
	I0429 20:25:12.210726    6560 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210726    6560 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210937    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:25:12.211197    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:25:12.215717    6560 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 20:25:12.234164    6560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 20:25:12.242817    6560 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: 2024-04-29 20:23:14.801002600 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Change: 2024-04-29 20:23:06.257000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] >  Birth: -
	I0429 20:25:12.242817    6560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 20:25:12.242817    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 20:25:12.301387    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 20:25:13.060621    6560 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > serviceaccount/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > daemonset.apps/kindnet created
	I0429 20:25:13.060707    6560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-515700 minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=multinode-515700 minikube.k8s.io/primary=true
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.092072    6560 command_runner.go:130] > -16
	I0429 20:25:13.092113    6560 ops.go:34] apiserver oom_adj: -16
	I0429 20:25:13.290753    6560 command_runner.go:130] > node/multinode-515700 labeled
	I0429 20:25:13.292700    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 20:25:13.306335    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.426974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:13.819653    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.947766    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.320587    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.442246    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.822864    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.943107    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.309117    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.432718    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.814070    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.933861    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.317878    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.440680    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.819594    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.942387    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.322995    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.435199    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.809136    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.932465    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.308164    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.429047    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.808817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.928476    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.310090    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.432479    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.815590    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.929079    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.321723    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.442512    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.819466    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.933742    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.309314    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.424974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.811819    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.952603    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.316794    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.432125    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.808890    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.925838    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.310021    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.434432    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.819369    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.948876    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.307817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.457947    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.818980    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.932003    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:25.308659    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:25.488149    6560 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 20:25:25.488217    6560 command_runner.go:130] > default   0         1s
	I0429 20:25:25.489686    6560 kubeadm.go:1107] duration metric: took 12.4288824s to wait for elevateKubeSystemPrivileges
	W0429 20:25:25.489686    6560 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:25:25.489686    6560 kubeadm.go:393] duration metric: took 27.7719601s to StartCluster
	I0429 20:25:25.490694    6560 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.490694    6560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:25.491677    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.493697    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 20:25:25.493697    6560 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:25:25.498680    6560 out.go:177] * Verifying Kubernetes components...
	I0429 20:25:25.493697    6560 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:25:25.494664    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:25.504657    6560 addons.go:69] Setting storage-provisioner=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:69] Setting default-storageclass=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:234] Setting addon storage-provisioner=true in "multinode-515700"
	I0429 20:25:25.504657    6560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-515700"
	I0429 20:25:25.504657    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.520673    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:25:25.944109    6560 command_runner.go:130] > apiVersion: v1
	I0429 20:25:25.944267    6560 command_runner.go:130] > data:
	I0429 20:25:25.944267    6560 command_runner.go:130] >   Corefile: |
	I0429 20:25:25.944367    6560 command_runner.go:130] >     .:53 {
	I0429 20:25:25.944367    6560 command_runner.go:130] >         errors
	I0429 20:25:25.944367    6560 command_runner.go:130] >         health {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            lameduck 5s
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         ready
	I0429 20:25:25.944367    6560 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            pods insecure
	I0429 20:25:25.944367    6560 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 20:25:25.944367    6560 command_runner.go:130] >            ttl 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         prometheus :9153
	I0429 20:25:25.944367    6560 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            max_concurrent 1000
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         cache 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loop
	I0429 20:25:25.944367    6560 command_runner.go:130] >         reload
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loadbalance
	I0429 20:25:25.944367    6560 command_runner.go:130] >     }
	I0429 20:25:25.944367    6560 command_runner.go:130] > kind: ConfigMap
	I0429 20:25:25.944367    6560 command_runner.go:130] > metadata:
	I0429 20:25:25.944367    6560 command_runner.go:130] >   creationTimestamp: "2024-04-29T20:25:11Z"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   name: coredns
	I0429 20:25:25.944367    6560 command_runner.go:130] >   namespace: kube-system
	I0429 20:25:25.944367    6560 command_runner.go:130] >   resourceVersion: "265"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   uid: af2c186a-a14a-4671-8545-05c5ef5d4a89
	I0429 20:25:25.949389    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 20:25:26.023682    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:25:26.408680    6560 command_runner.go:130] > configmap/coredns replaced
	I0429 20:25:26.414254    6560 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.417677    6560 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 20:25:26.417677    6560 node_ready.go:35] waiting up to 6m0s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.435291    6560 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.437034    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.438430    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Audit-Id: a2ae57e5-53a3-4342-ad5c-c2149e87ef04
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438430    6560 round_trippers.go:580]     Audit-Id: 2e6b22a8-9874-417c-a6a5-f7b7437121f7
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438909    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.439086    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:26.440203    6560 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.440298    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.440406    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.440406    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.440519    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:26.440519    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.459913    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.459962    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Audit-Id: 9ca07d91-957f-4992-9642-97b01e07dde3
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.459962    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"393","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.918339    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.918339    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918339    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918339    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.918300    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.918498    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918580    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918580    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Audit-Id: 70383541-35df-461a-b4fb-41bd3b56f11d
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Audit-Id: e628428d-1384-4709-a32e-084c9dfec614
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.929164    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"404","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.929400    6560 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-515700" context rescaled to 1 replicas
	I0429 20:25:26.929400    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.426913    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.426913    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.426913    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.426913    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.430795    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:27.430795    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Audit-Id: e4e6b2b1-e008-4f2a-bae4-3596fce97666
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.430996    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.431340    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.789217    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.789348    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.792426    6560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:25:27.791141    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:27.795103    6560 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:27.795205    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:25:27.795205    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.795205    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:27.795924    6560 addons.go:234] Setting addon default-storageclass=true in "multinode-515700"
	I0429 20:25:27.795924    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:27.796802    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.922993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.923088    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.923175    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.923175    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.929435    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:27.929435    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.929545    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.929545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.929638    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Audit-Id: 8ef77f9f-d18f-4fa7-ab77-85c137602c84
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.930046    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.432611    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.432611    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.432611    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.432611    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.441320    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:28.441862    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Audit-Id: d32cd9f8-494c-4a69-b028-606c7d354657
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.442308    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.442914    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:28.927674    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.927674    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.927674    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.927897    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.933213    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:28.933794    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Audit-Id: 75d40b2c-c2ed-4221-9361-88591791a649
	I0429 20:25:28.934208    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.422724    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.422898    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.422898    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.422975    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.426431    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:29.426876    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Audit-Id: dde47b6c-069b-408d-a5c6-0a2ea7439643
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.427261    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.918308    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.918308    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.918308    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.918407    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.921072    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:29.921072    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.921072    6560 round_trippers.go:580]     Audit-Id: d4643df6-68ad-4c4c-9604-a5a4d019fba1
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.922076    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.057466    6560 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:30.057636    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:25:30.057750    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:30.145026    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:30.424041    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.424310    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.424310    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.424310    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.428606    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.429051    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.429263    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Audit-Id: 2c59a467-8079-41ed-ac1d-f96dd660d343
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.429435    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.931993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.931993    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.931993    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.931993    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.936635    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.936635    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.937644    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Audit-Id: 9214de5b-8221-4c68-b6b9-a92fe7d41fd1
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.938175    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.939066    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:31.423866    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.423866    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.423866    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.423988    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.427054    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.427827    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.427827    6560 round_trippers.go:580]     Audit-Id: 5f66acb8-ef38-4220-83b6-6e87fbec6f58
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.427869    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:31.932664    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.932664    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.932761    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.932761    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.936680    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.936680    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Audit-Id: f9fb721e-ccaf-4e33-ac69-8ed840761191
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.937009    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.312723    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:32.424680    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.424953    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.424953    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.424953    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.428624    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:32.428906    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.428906    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Audit-Id: d3a39f3a-571d-46c0-a442-edf136da8a11
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.429531    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.858444    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:32.926226    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.926317    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.926393    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.926393    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.929204    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:32.929583    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.929583    6560 round_trippers.go:580]     Audit-Id: 55fc987d-65c0-4ac8-95d2-7fa4185e179b
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.929734    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.930327    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.034553    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:33.425759    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.425833    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.425833    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.425833    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.428624    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.429656    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Audit-Id: d581fce7-8906-48d7-8e13-2d1aba9dec04
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.429916    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.430438    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:33.930984    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.931053    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.931053    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.931053    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.933717    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.933717    6560 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.933717    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.933717    6560 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Audit-Id: 680ed792-db71-4b29-abb9-40f7154e8b1e
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.933717    6560 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 20:25:33.933717    6560 command_runner.go:130] > pod/storage-provisioner created
	I0429 20:25:33.933717    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.428102    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.428102    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.428102    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.428102    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.431722    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:34.432624    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Audit-Id: 86cc0608-3000-42b0-9ce8-4223e32d60c3
	I0429 20:25:34.432684    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.433082    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.932029    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.932316    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.932316    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.932316    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.936749    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:34.936749    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Audit-Id: 0e63a4db-3dd4-4e74-8b79-c019b6b97e89
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.937415    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:35.024893    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:35.025151    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:35.025317    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:35.170634    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:35.371184    6560 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0429 20:25:35.371418    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 20:25:35.371571    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.371571    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.371571    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.380781    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:35.381213    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Audit-Id: 31f5e265-3d38-4520-88d0-33f47325189c
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Length: 1273
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.381380    6560 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 20:25:35.382106    6560 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.382183    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 20:25:35.382183    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:35.382269    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.390758    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.390758    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.390758    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Audit-Id: 4dbb716e-2d97-4c38-b342-f63e7d38a4d0
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Length: 1220
	I0429 20:25:35.391190    6560 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.395279    6560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 20:25:35.397530    6560 addons.go:505] duration metric: took 9.9037568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 20:25:35.421733    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.421733    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.421733    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.421733    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.452743    6560 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 20:25:35.452743    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.452743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Audit-Id: 316d0393-7ba5-4629-87cb-7ae54d0ea965
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.454477    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.455068    6560 node_ready.go:49] node "multinode-515700" has status "Ready":"True"
	I0429 20:25:35.455148    6560 node_ready.go:38] duration metric: took 9.0374019s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:35.455148    6560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:35.455213    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:35.455213    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.455213    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.455213    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.473128    6560 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 20:25:35.473128    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Audit-Id: 81e159c0-b703-47ba-a9f3-82cc907b8705
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.475820    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"431","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0429 20:25:35.481714    6560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:35.482325    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.482379    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.482379    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.482432    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.491093    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.491093    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Audit-Id: a2eb7ca2-d415-4a7c-a1f0-1ac743bd8f82
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.492090    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.493335    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.493335    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.493335    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.493419    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.496084    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:35.496084    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.496084    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Audit-Id: f61c97ad-ee7a-4666-9244-d7d2091b5d09
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.497097    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.497131    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.497332    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.991323    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.991323    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.991323    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.991323    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.995451    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:35.995451    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Audit-Id: faa8a1a4-279f-4dc3-99c8-8c3b9e9ed746
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:35.996592    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.997239    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.997292    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.997292    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.997292    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.999987    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:35.999987    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Audit-Id: 070c7fff-f707-4b9a-9aef-031cedc68a8c
	I0429 20:25:36.000411    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.483004    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.483004    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.483004    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.483004    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.488152    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:36.488152    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.488152    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.488678    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Audit-Id: fb5cc675-b39d-4cb0-ba8c-24140b3d95e8
	I0429 20:25:36.489818    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.490926    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.490926    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.490985    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.490985    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.494654    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:36.494654    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Audit-Id: fe6d880a-4cf8-4b10-8c7f-debde123eafc
	I0429 20:25:36.495423    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.991643    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.991643    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.991643    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.991855    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.996384    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:36.996384    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Audit-Id: 933a6dd5-a0f7-4380-8189-3e378a8a620d
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:36.997332    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.997760    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.997760    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.997760    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.997760    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.000889    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.000889    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.001211    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Audit-Id: 0342e743-45a6-4fd7-97be-55a766946396
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.001274    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.001759    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.495712    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:37.495712    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.495712    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.495712    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.508671    6560 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 20:25:37.508671    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Audit-Id: d30c6154-a41b-4a0d-976f-d19f40e67223
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.508671    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0429 20:25:37.510663    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.510663    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.510663    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.510663    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.513686    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.513686    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Audit-Id: 397b83a5-95f9-4df8-a76b-042ecc96922c
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.514662    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.514662    6560 pod_ready.go:92] pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.514662    6560 pod_ready.go:81] duration metric: took 2.0329329s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-515700
	I0429 20:25:37.514662    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.514662    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.514662    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.517691    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.517691    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Audit-Id: df53f071-06ed-4797-a51b-7d01b84cac86
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.518412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-515700","namespace":"kube-system","uid":"85f2dc9a-17b5-413c-ab83-d3cbe955571e","resourceVersion":"319","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.241.25:2379","kubernetes.io/config.hash":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.mirror":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.seen":"2024-04-29T20:25:11.718525866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0429 20:25:37.519044    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.519044    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.519124    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.519124    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.521788    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.521788    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Audit-Id: ee5fdb3e-9869-4cd7-996a-a25b453aeb6b
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.521944    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.522769    6560 pod_ready.go:92] pod "etcd-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.522844    6560 pod_ready.go:81] duration metric: took 8.1819ms for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.522844    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.523015    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-515700
	I0429 20:25:37.523015    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.523079    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.523079    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.525575    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.525575    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Audit-Id: cd9d851c-f606-48c9-8da3-3d194ab5464f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.526015    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-515700","namespace":"kube-system","uid":"f5a212eb-87a9-476a-981a-9f31731f39e6","resourceVersion":"312","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.241.25:8443","kubernetes.io/config.hash":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.mirror":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.seen":"2024-04-29T20:25:11.718530866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0429 20:25:37.526356    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.526356    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.526356    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.526356    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.535954    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:37.535954    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Audit-Id: 018aa21f-d408-4777-b54c-eb7aa2295a34
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.536470    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.536974    6560 pod_ready.go:92] pod "kube-apiserver-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.537034    6560 pod_ready.go:81] duration metric: took 14.0881ms for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537034    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537183    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-515700
	I0429 20:25:37.537276    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.537297    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.537297    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.539964    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.539964    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Audit-Id: d3232756-fc07-4b33-a3b5-989d2932cec4
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.541274    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-515700","namespace":"kube-system","uid":"2c9ba563-c2af-45b7-bc1e-bf39759a339b","resourceVersion":"315","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.mirror":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.seen":"2024-04-29T20:25:11.718532066Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0429 20:25:37.541935    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.541935    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.541935    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.541935    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.555960    6560 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 20:25:37.555960    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Audit-Id: 2d117219-3b1a-47fe-99a4-7e5aea7e84d3
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.555960    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.555960    6560 pod_ready.go:92] pod "kube-controller-manager-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.555960    6560 pod_ready.go:81] duration metric: took 18.9251ms for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.555960    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.556943    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gx5x
	I0429 20:25:37.556943    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.556943    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.556943    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.559965    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.560477    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.560477    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Audit-Id: 14e6d1be-eac6-4f20-9621-b409c951fae1
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.560781    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gx5x","generateName":"kube-proxy-","namespace":"kube-system","uid":"886ac698-7e9b-431b-b822-577331b02f41","resourceVersion":"407","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"027f1d05-009f-4199-81e9-45b0a2d3710f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"027f1d05-009f-4199-81e9-45b0a2d3710f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0429 20:25:37.561552    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.561581    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.561581    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.561581    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.567713    6560 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 20:25:37.567713    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Audit-Id: 678df177-6944-4d30-b889-62528c06bab2
	I0429 20:25:37.567713    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.568391    6560 pod_ready.go:92] pod "kube-proxy-6gx5x" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.568391    6560 pod_ready.go:81] duration metric: took 12.4313ms for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.568391    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.701559    6560 request.go:629] Waited for 132.9214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701779    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701853    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.701853    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.701853    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.705314    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.706129    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.706129    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.706183    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Audit-Id: 4fb010ad-4d68-4aa0-9ba4-f68d04faa9e8
	I0429 20:25:37.706412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-515700","namespace":"kube-system","uid":"096d3e94-25ba-49b3-b329-6fb47fc88f25","resourceVersion":"334","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.mirror":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.seen":"2024-04-29T20:25:11.718533166Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0429 20:25:37.905204    6560 request.go:629] Waited for 197.8802ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.905322    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.905466    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.909057    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.909159    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Audit-Id: a6cecf7e-83ad-4d5f-8cbb-a65ced7e83ce
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.909286    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.909697    6560 pod_ready.go:92] pod "kube-scheduler-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.909697    6560 pod_ready.go:81] duration metric: took 341.3037ms for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.909697    6560 pod_ready.go:38] duration metric: took 2.4545299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:37.909697    6560 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:25:37.923721    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:25:37.956142    6560 command_runner.go:130] > 2047
	I0429 20:25:37.956226    6560 api_server.go:72] duration metric: took 12.462433s to wait for apiserver process to appear ...
	I0429 20:25:37.956226    6560 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:25:37.956330    6560 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:25:37.965150    6560 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:25:37.965332    6560 round_trippers.go:463] GET https://172.17.241.25:8443/version
	I0429 20:25:37.965364    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.965364    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.965364    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.967124    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:37.967124    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Audit-Id: c3b17e5f-8eb5-4422-bcd1-48cea5831311
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.967423    6560 round_trippers.go:580]     Content-Length: 263
	I0429 20:25:37.967423    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 20:25:37.967530    6560 api_server.go:141] control plane version: v1.30.0
	I0429 20:25:37.967530    6560 api_server.go:131] duration metric: took 11.2306ms to wait for apiserver health ...
	I0429 20:25:37.967629    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:25:38.109818    6560 request.go:629] Waited for 142.1878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110201    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110256    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.110275    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.110275    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.118070    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.118070    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Audit-Id: 557b3073-d14e-4919-8133-995d5b042d22
	I0429 20:25:38.119823    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.123197    6560 system_pods.go:59] 8 kube-system pods found
	I0429 20:25:38.123197    6560 system_pods.go:61] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.123197    6560 system_pods.go:74] duration metric: took 155.566ms to wait for pod list to return data ...
	I0429 20:25:38.123197    6560 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:25:38.295950    6560 request.go:629] Waited for 172.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.296300    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.296300    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.300424    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:38.300424    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Content-Length: 261
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Audit-Id: 7466bf5b-fa07-4a6b-bc06-274738fc9259
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.300674    6560 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"13c4332f-9236-4f04-9e46-f5a98bc3d731","resourceVersion":"343","creationTimestamp":"2024-04-29T20:25:24Z"}}]}
	I0429 20:25:38.300674    6560 default_sa.go:45] found service account: "default"
	I0429 20:25:38.300674    6560 default_sa.go:55] duration metric: took 177.4758ms for default service account to be created ...
	I0429 20:25:38.300674    6560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:25:38.498686    6560 request.go:629] Waited for 197.291ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.498782    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.499005    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.499005    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.499005    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.506756    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.507387    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Audit-Id: ffc5efdb-4263-4450-8ff2-c1bb3f979300
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.507485    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.507503    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.508809    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.512231    6560 system_pods.go:86] 8 kube-system pods found
	I0429 20:25:38.512305    6560 system_pods.go:89] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.512305    6560 system_pods.go:89] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.512451    6560 system_pods.go:89] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.512451    6560 system_pods.go:126] duration metric: took 211.7756ms to wait for k8s-apps to be running ...
	I0429 20:25:38.512451    6560 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:25:38.526027    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:25:38.555837    6560 system_svc.go:56] duration metric: took 43.3852ms WaitForService to wait for kubelet
	I0429 20:25:38.555837    6560 kubeadm.go:576] duration metric: took 13.0620394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:25:38.556007    6560 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:25:38.701455    6560 request.go:629] Waited for 145.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701896    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701917    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.701917    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.702032    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.709221    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.709221    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Audit-Id: 9241b2a0-c483-4bfb-8a19-8f5c9b610b53
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.709221    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0429 20:25:38.710061    6560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:25:38.710061    6560 node_conditions.go:123] node cpu capacity is 2
	I0429 20:25:38.710061    6560 node_conditions.go:105] duration metric: took 154.0529ms to run NodePressure ...
	I0429 20:25:38.710061    6560 start.go:240] waiting for startup goroutines ...
	I0429 20:25:38.710061    6560 start.go:245] waiting for cluster config update ...
	I0429 20:25:38.710061    6560 start.go:254] writing updated cluster config ...
	I0429 20:25:38.717493    6560 out.go:177] 
	I0429 20:25:38.721129    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.735840    6560 out.go:177] * Starting "multinode-515700-m02" worker node in "multinode-515700" cluster
	I0429 20:25:38.738518    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:25:38.738518    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:25:38.738983    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:25:38.739240    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:25:38.739481    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.745029    6560 start.go:360] acquireMachinesLock for multinode-515700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:25:38.745029    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m02"
	I0429 20:25:38.745029    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 20:25:38.745575    6560 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 20:25:38.748852    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:25:38.748852    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:25:38.748852    6560 client.go:168] LocalClient.Create starting
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:40.746212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:25:42.605453    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:47.992432    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:47.992702    6560 main.go:141] libmachine: [stderr =====>] : 
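	The two switch queries above return a JSON array of switches (filtered to External switches or the well-known Default Switch GUID), from which the driver selects one. A minimal Go sketch of that selection, assuming hypothetical names (`vmSwitch`, `pickSwitch` are illustrative, not minikube's actual hyperv-driver API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the Get-VMSwitch pipeline in the log.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // 1 = Internal (Default Switch), 2 = External
}

// pickSwitch prefers an External switch, falling back to the first entry
// (the Default Switch in this run). Sketch only; error handling is minimal.
func pickSwitch(raw []byte) (vmSwitch, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return vmSwitch{}, err
	}
	if len(switches) == 0 {
		return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch found")
	}
	for _, s := range switches {
		if s.SwitchType == 2 {
			return s, nil
		}
	}
	return switches[0], nil
}

func main() {
	// Same shape as the PowerShell stdout captured above.
	out := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	s, err := pickSwitch(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Name) // prints "Default Switch"
}
```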
	I0429 20:25:47.996014    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:25:48.551162    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: Creating VM...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:51.874174    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:25:51.874221    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:53.736899    6560 main.go:141] libmachine: Creating VHD
	I0429 20:25:53.737514    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D65FFD0C-285E-44D0-8723-21544BDDE71A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing SSH key tar header
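	The "magic tar header" / "SSH key tar header" steps follow the docker-machine convention of embedding a small tar stream (holding the SSH public key) at the start of the raw VHD, which the boot2docker guest unpacks on first boot. A hedged Go sketch of building such a stream (`makeDiskTar` is an illustrative name, not minikube's actual helper):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// makeDiskTar returns a tar stream containing .ssh/authorized_keys,
// suitable for writing at the start of a raw disk image. Sketch only.
func makeDiskTar(pubKey []byte) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	entries := []struct {
		name string
		mode int64
		body []byte
	}{
		{".ssh/", 0700, nil},                 // directory entry
		{".ssh/authorized_keys", 0644, pubKey}, // the injected key
	}
	for _, e := range entries {
		hdr := &tar.Header{Name: e.name, Mode: e.mode, Size: int64(len(e.body))}
		if e.body == nil {
			hdr.Typeflag = tar.TypeDir
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(e.body); err != nil {
			return nil, err
		}
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	data, err := makeDiskTar([]byte("ssh-rsa AAAA... jenkins@minikube6\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(data) > 0) // prints "true"
}
```

The fixed 10MB VHD created above exists only to host this stream; it is then converted to a dynamic VHD and resized to the requested 20000MB.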
	I0429 20:25:57.529054    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:00.734035    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -SizeBytes 20000MB
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:03.314283    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-515700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700-m02 -DynamicMemoryEnabled $false
	I0429 20:26:09.480100    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700-m02 -Count 2
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:11.716979    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\boot2docker.iso'
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:14.377298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd'
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:17.090909    6560 main.go:141] libmachine: Starting VM...
	I0429 20:26:17.090909    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m02
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:22.527096    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:26.113296    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:28.339433    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:30.953587    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:30.953628    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:31.955478    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:34.197688    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:34.197831    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:34.197901    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:37.817016    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:42.682666    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:42.683603    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:43.685897    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:48.604877    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:48.604915    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:48.604999    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:50.794876    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:50.795093    6560 main.go:141] libmachine: [stderr =====>] : 
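	The repeated state/IP queries above are a poll loop: the driver re-runs `(Get-VM ...).networkadapters[0].ipaddresses[0]` with a short sleep until DHCP hands the VM an address (here, after roughly 28 seconds). A minimal Go sketch of that loop, with `waitForIP` and `queryIP` as hypothetical stand-ins for the driver's real shell-out:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls queryIP until it returns a non-empty address or the
// attempt budget is exhausted. In the real driver, queryIP shells out to
// PowerShell as in the log lines above.
func waitForIP(queryIP func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := queryIP(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for VM IP")
}

func main() {
	// Simulate this run: several empty polls, then DHCP assigns an address.
	responses := []string{"", "", "", "", "172.17.253.145"}
	i := 0
	query := func() string { ip := responses[i]; i++; return ip }
	ip, err := waitForIP(query, 10, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // prints "172.17.253.145"
}
```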
	I0429 20:26:50.795407    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:26:50.795407    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:52.992195    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:52.992243    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:52.992331    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:55.630552    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:55.641728    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:26:55.642758    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:26:55.769182    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:26:55.769182    6560 buildroot.go:166] provisioning hostname "multinode-515700-m02"
	I0429 20:26:55.769333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:57.942857    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:57.943721    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:57.943789    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:00.610012    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:00.610498    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:00.617342    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:00.618022    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:00.618022    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700-m02 && echo "multinode-515700-m02" | sudo tee /etc/hostname
	I0429 20:27:00.774430    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m02
	
	I0429 20:27:00.775391    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:02.970796    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:02.971352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:02.971577    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:05.640782    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:05.640782    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:05.640782    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:27:05.779330    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:27:05.779330    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:27:05.779435    6560 buildroot.go:174] setting up certificates
	I0429 20:27:05.779435    6560 provision.go:84] configureAuth start
	I0429 20:27:05.779531    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:07.939785    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:10.607752    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:10.608236    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:10.608319    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:15.428095    6560 provision.go:143] copyHostCerts
	I0429 20:27:15.429066    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:27:15.429066    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:27:15.429066    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:27:15.429626    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:27:15.430936    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:27:15.431366    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:27:15.431366    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:27:15.431875    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:27:15.432822    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:27:15.433064    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:27:15.433064    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:27:15.433498    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:27:15.434807    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700-m02 san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]
	I0429 20:27:15.511954    6560 provision.go:177] copyRemoteCerts
	I0429 20:27:15.527105    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:27:15.527105    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:20.368198    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:20.368587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:20.368930    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:20.467819    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9406764s)
	I0429 20:27:20.468832    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:27:20.469887    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:27:20.524889    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:27:20.525559    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 20:27:20.578020    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:27:20.578217    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:27:20.634803    6560 provision.go:87] duration metric: took 14.8552541s to configureAuth
	I0429 20:27:20.634874    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:27:20.635533    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:27:20.635638    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:22.779762    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:25.427345    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:25.427345    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:25.428345    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:27:25.562050    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:27:25.562195    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:27:25.562515    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:27:25.562592    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:30.404141    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:30.405195    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:30.412105    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:30.413171    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:30.413700    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.241.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:27:30.578477    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.241.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
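	(The unit file above relies on systemd's list-reset behavior: a unit may not carry two `ExecStart=` lines unless `Type=oneshot`, so a drop-in that overrides the command must first emit an empty `ExecStart=` to clear the inherited one. A minimal stand-alone sketch of that pattern, using a throwaway file under /tmp rather than the real `/lib/systemd/system/docker.service.new`:)

	```shell
	# Sketch of the ExecStart-clearing drop-in pattern (illustrative paths only).
	# systemd rejects units with more than one ExecStart= unless Type=oneshot,
	# so the first, empty ExecStart= resets the inherited command list.
	cat > /tmp/docker.service.new <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	# Exactly one effective (non-empty) command remains:
	grep -c '^ExecStart=/' /tmp/docker.service.new
	```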
	I0429 20:27:30.578477    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:32.772580    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:35.465933    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:35.466426    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:35.466509    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:27:37.701893    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
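	(The SSH command above uses a common "replace only if changed" idiom: `diff` exits non-zero when the files differ or the target does not yet exist, as in this run, so the `mv` + `daemon-reload` + `restart` branch fires only when the unit actually changed. A self-contained sketch under assumed /tmp paths, with `echo` standing in for the systemctl calls:)

	```shell
	# Illustrative sketch: install a file and restart only when content changed.
	rm -f /tmp/unit
	printf '[Unit]\n' > /tmp/unit.new
	install_if_changed() {
	  # diff fails if /tmp/unit is missing or differs from /tmp/unit.new,
	  # which triggers the install-and-restart branch.
	  diff -u /tmp/unit /tmp/unit.new >/dev/null 2>&1 || {
	    mv /tmp/unit.new /tmp/unit
	    echo "restarted"   # stand-in for: systemctl daemon-reload && systemctl restart ...
	  }
	}
	install_if_changed
	```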
	I0429 20:27:37.701981    6560 machine.go:97] duration metric: took 46.9062133s to provisionDockerMachine
	I0429 20:27:37.702052    6560 client.go:171] duration metric: took 1m58.9522849s to LocalClient.Create
	I0429 20:27:37.702194    6560 start.go:167] duration metric: took 1m58.9524269s to libmachine.API.Create "multinode-515700"
	I0429 20:27:37.702194    6560 start.go:293] postStartSetup for "multinode-515700-m02" (driver="hyperv")
	I0429 20:27:37.702194    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:27:37.716028    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:27:37.716028    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:39.888498    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:39.889355    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:39.889707    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:42.576527    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:42.688245    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9721792s)
	I0429 20:27:42.703472    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:27:42.710185    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:27:42.710391    6560 command_runner.go:130] > ID=buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:27:42.710391    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:27:42.710562    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:27:42.710562    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:27:42.710640    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:27:42.712121    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:27:42.712121    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:27:42.725734    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:27:42.745571    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:27:42.798223    6560 start.go:296] duration metric: took 5.0959902s for postStartSetup
	I0429 20:27:42.801718    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:44.985225    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:47.630520    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:27:47.633045    6560 start.go:128] duration metric: took 2m8.8864784s to createHost
	I0429 20:27:47.633167    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:49.823309    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:49.823412    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:49.823495    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:52.524084    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:52.524183    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:52.530451    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:52.531204    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:52.531204    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:27:52.658970    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422472.660345683
	
	I0429 20:27:52.659208    6560 fix.go:216] guest clock: 1714422472.660345683
	I0429 20:27:52.659208    6560 fix.go:229] Guest: 2024-04-29 20:27:52.660345683 +0000 UTC Remote: 2024-04-29 20:27:47.6330452 +0000 UTC m=+346.394263801 (delta=5.027300483s)
	I0429 20:27:52.659208    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:57.461861    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:57.461927    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:57.467747    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:57.468699    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:57.468699    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422472
	I0429 20:27:57.617018    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:27:52 UTC 2024
	
	I0429 20:27:57.617018    6560 fix.go:236] clock set: Mon Apr 29 20:27:52 UTC 2024
	 (err=<nil>)
	I0429 20:27:57.617018    6560 start.go:83] releasing machines lock for "multinode-515700-m02", held for 2m18.8709228s
	I0429 20:27:57.618122    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:59.795247    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:02.475615    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:02.475867    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:02.479078    6560 out.go:177] * Found network options:
	I0429 20:28:02.481434    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.483990    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.486147    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.488513    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 20:28:02.490094    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.492090    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:28:02.492090    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:02.504078    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:28:02.504078    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:07.440744    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.440938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.441026    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.467629    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.629032    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:28:07.630105    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1379759s)
	I0429 20:28:07.630105    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 20:28:07.630229    6560 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1259881s)
	W0429 20:28:07.630229    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:28:07.649597    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:28:07.685721    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:28:07.685954    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:28:07.685954    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:07.686227    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:07.722613    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:28:07.736060    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:28:07.771561    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:28:07.793500    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:28:07.809715    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:28:07.846242    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.882404    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:28:07.918280    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.956186    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:28:07.994072    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:28:08.029701    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:28:08.067417    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
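	(Each of the `sed` rewrites above follows the same shape: match a key line in `/etc/containerd/config.toml`, capture its leading indentation with `( *)`, and re-emit the line with a new value via `\1`, so TOML nesting is preserved. A minimal reproduction of the `SystemdCgroup` rule against a throwaway file, not the real config:)

	```shell
	# Illustrative: the indentation-preserving sed rewrite used for config.toml.
	cat > /tmp/config.toml <<'EOF'
	    SystemdCgroup = true
	EOF
	# -r enables extended regexes; \1 re-emits the captured leading spaces.
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
	cat /tmp/config.toml
	```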
	I0429 20:28:08.104772    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:28:08.126209    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:28:08.140685    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:28:08.181598    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:08.410362    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:28:08.449856    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:08.466974    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Unit]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:28:08.492900    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:28:08.492900    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:28:08.492900    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Service]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Type=notify
	I0429 20:28:08.492900    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:28:08.492900    6560 command_runner.go:130] > Environment=NO_PROXY=172.17.241.25
	I0429 20:28:08.492900    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:28:08.492900    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:28:08.492900    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:28:08.492900    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:28:08.492900    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:28:08.492900    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:28:08.492900    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:28:08.492900    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:28:08.492900    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:28:08.493891    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:28:08.493891    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:28:08.493891    6560 command_runner.go:130] > Delegate=yes
	I0429 20:28:08.493891    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:28:08.493891    6560 command_runner.go:130] > KillMode=process
	I0429 20:28:08.493891    6560 command_runner.go:130] > [Install]
	I0429 20:28:08.493891    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:28:08.505928    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.548562    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:28:08.606977    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.652185    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.695349    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:28:08.785230    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.816602    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:08.853434    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:28:08.870019    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:28:08.876256    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:28:08.890247    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:28:08.911471    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:28:08.962890    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:28:09.201152    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:28:09.397561    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:28:09.398166    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:28:09.451159    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:09.673084    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:29:10.809648    6560 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 20:29:10.809648    6560 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 20:29:10.809648    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1361028s)
	I0429 20:29:10.827248    6560 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 20:29:10.852173    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 20:29:10.852279    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 20:29:10.852319    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 20:29:10.852739    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854100    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854118    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854142    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854175    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 20:29:10.865335    6560 out.go:177] 
	W0429 20:29:10.865335    6560 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 20:29:10.865335    6560 out.go:239] * 
	W0429 20:29:10.869400    6560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:29:10.876700    6560 out.go:177] 
	
	
	==> Docker <==
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.935874732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:29:50 multinode-515700 dockerd[1331]: time="2024-04-29T20:29:50.936415956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:55 multinode-515700 dockerd[1325]: 2024/04/29 20:42:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:55 multinode-515700 dockerd[1325]: 2024/04/29 20:42:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:26 multinode-515700 dockerd[1325]: 2024/04/29 20:47:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	32c6f043cec2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   e1a58f6d29ec9       busybox-fc5497c4f-dv5v8
	15da1b832ef20       cbb01a7bd410d                                                                                         23 minutes ago      Running             coredns                   0                   73ab97e30d3d0       coredns-7db6d8ff4d-drcsj
	b26e455e6f823       6e38f40d628db                                                                                         23 minutes ago      Running             storage-provisioner       0                   0274116a036cf       storage-provisioner
	11141cf0a01e5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              23 minutes ago      Running             kindnet-cni               0                   5c226cf922db1       kindnet-lt84t
	8d116812e2fa7       a0bf559e280cf                                                                                         23 minutes ago      Running             kube-proxy                0                   c4e88976a7bb5       kube-proxy-6gx5x
	9b9ad8fbed853       c42f13656d0b2                                                                                         23 minutes ago      Running             kube-apiserver            0                   e1040c321d522       kube-apiserver-multinode-515700
	7748681b165fb       259c8277fcbbc                                                                                         23 minutes ago      Running             kube-scheduler            0                   ab47450efbe05       kube-scheduler-multinode-515700
	01f30fac305bc       3861cfcd7c04c                                                                                         23 minutes ago      Running             etcd                      0                   b5202cca492c4       etcd-multinode-515700
	c5de44f1f1066       c7aad43836fa5                                                                                         23 minutes ago      Running             kube-controller-manager   0                   4ae9818227910       kube-controller-manager-multinode-515700
	
	
	==> coredns [15da1b832ef2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 658b75f59357881579d818bea4574a099ffd8bf4e34cb2d6414c381890635887b0895574e607ab48d69c0bc2657640404a00a48de79c5b96ce27f6a68e70a912
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36587 - 14172 "HINFO IN 4725538422205950284.7962128480288568612. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062354244s
	[INFO] 10.244.0.3:46156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244102s
	[INFO] 10.244.0.3:48057 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.210765088s
	[INFO] 10.244.0.3:47676 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15403778s
	[INFO] 10.244.0.3:57534 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.237328274s
	[INFO] 10.244.0.3:38726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000345103s
	[INFO] 10.244.0.3:54844 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.04703092s
	[INFO] 10.244.0.3:51897 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000879808s
	[INFO] 10.244.0.3:57925 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122101s
	[INFO] 10.244.0.3:39997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012692914s
	[INFO] 10.244.0.3:37301 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000333403s
	[INFO] 10.244.0.3:60294 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172702s
	[INFO] 10.244.0.3:33135 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000250902s
	[INFO] 10.244.0.3:46585 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141701s
	[INFO] 10.244.0.3:41280 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127902s
	[INFO] 10.244.0.3:46602 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000220001s
	[INFO] 10.244.0.3:47802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077001s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	[INFO] 10.244.0.3:45741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166201s
	[INFO] 10.244.0.3:48683 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	[INFO] 10.244.0.3:47252 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159702s
	
	
	==> describe nodes <==
	Name:               multinode-515700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:48:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:45:36 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:45:36 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:45:36 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:45:36 +0000   Mon, 29 Apr 2024 20:25:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.241.25
	  Hostname:    multinode-515700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc8de88647d944658545c7ae4a702aea
	  System UUID:                68adc21b-67d2-5446-9537-0dea9fd880a0
	  Boot ID:                    9507eca5-5f1f-4862-974e-a61fb27048d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dv5v8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7db6d8ff4d-drcsj                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-multinode-515700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-lt84t                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-multinode-515700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-multinode-515700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-6gx5x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-multinode-515700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23m                node-controller  Node multinode-515700 event: Registered Node multinode-515700 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-515700 status is now: NodeReady
	
	
	Name:               multinode-515700-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T20_46_05_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:46:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:48:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:46:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:46:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:46:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:46:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.240.210
	  Hostname:    multinode-515700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cba11e160ba341e08600b430623543e3
	  System UUID:                c93866d4-f3c2-8b4a-808f-8a49ef3473c2
	  Boot ID:                    eca6382a-2500-4a1e-9ddd-477f0ebe0910
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2t4c2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kindnet-svhl6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m44s
	  kube-system                 kube-proxy-ds5fx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m44s (x2 over 2m45s)  kubelet          Node multinode-515700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s (x2 over 2m45s)  kubelet          Node multinode-515700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s (x2 over 2m45s)  kubelet          Node multinode-515700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m40s                  node-controller  Node multinode-515700-m03 event: Registered Node multinode-515700-m03 in Controller
	  Normal  NodeReady                2m21s                  kubelet          Node multinode-515700-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 20:24] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.212417] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +31.830340] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.112166] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.613568] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.259380] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.863180] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.213718] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.233297] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.301716] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.953055] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.129851] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.793087] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[Apr29 20:25] systemd-fstab-generator[1710]: Ignoring "noauto" option for root device
	[  +0.110579] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.112113] systemd-fstab-generator[2108]: Ignoring "noauto" option for root device
	[  +0.165104] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.220827] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.255309] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.248279] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 20:26] hrtimer: interrupt took 3466547 ns
	[Apr29 20:29] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [01f30fac305b] <==
	{"level":"info","ts":"2024-04-29T20:43:34.867693Z","caller":"traceutil/trace.go:171","msg":"trace[240417427] linearizableReadLoop","detail":"{readStateIndex:1570; appliedIndex:1569; }","duration":"127.690146ms","start":"2024-04-29T20:43:34.739984Z","end":"2024-04-29T20:43:34.867674Z","steps":["trace[240417427] 'read index received'  (duration: 127.669446ms)","trace[240417427] 'applied index is now lower than readState.Index'  (duration: 20.2µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:43:34.867872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.868347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:43:34.868001Z","caller":"traceutil/trace.go:171","msg":"trace[1472637471] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1337; }","duration":"128.044349ms","start":"2024-04-29T20:43:34.739947Z","end":"2024-04-29T20:43:34.867992Z","steps":["trace[1472637471] 'agreement among raft nodes before linearized reading'  (duration: 127.795647ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:43:34.868426Z","caller":"traceutil/trace.go:171","msg":"trace[764321283] transaction","detail":"{read_only:false; response_revision:1337; number_of_response:1; }","duration":"224.704665ms","start":"2024-04-29T20:43:34.643711Z","end":"2024-04-29T20:43:34.868415Z","steps":["trace[764321283] 'process raft request'  (duration: 223.852758ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:45:06.303388Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1169}
	{"level":"info","ts":"2024-04-29T20:45:06.312061Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1169,"took":"8.045253ms","hash":475365449,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-29T20:45:06.312246Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":475365449,"revision":1169,"compact-revision":929}
	{"level":"info","ts":"2024-04-29T20:45:58.156567Z","caller":"traceutil/trace.go:171","msg":"trace[785089805] transaction","detail":"{read_only:false; response_revision:1453; number_of_response:1; }","duration":"170.534651ms","start":"2024-04-29T20:45:57.986006Z","end":"2024-04-29T20:45:58.156541Z","steps":["trace[785089805] 'process raft request'  (duration: 170.224549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:45:58.532911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.49431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-04-29T20:45:58.533001Z","caller":"traceutil/trace.go:171","msg":"trace[176342803] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1453; }","duration":"147.61771ms","start":"2024-04-29T20:45:58.385363Z","end":"2024-04-29T20:45:58.532981Z","steps":["trace[176342803] 'range keys from in-memory index tree'  (duration: 147.415808ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:45:58.717909Z","caller":"traceutil/trace.go:171","msg":"trace[259978277] transaction","detail":"{read_only:false; response_revision:1454; number_of_response:1; }","duration":"179.638307ms","start":"2024-04-29T20:45:58.538241Z","end":"2024-04-29T20:45:58.71788Z","steps":["trace[259978277] 'process raft request'  (duration: 179.431405ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:45:58.85575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.622912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:45:58.855965Z","caller":"traceutil/trace.go:171","msg":"trace[1396568622] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1454; }","duration":"115.880014ms","start":"2024-04-29T20:45:58.74007Z","end":"2024-04-29T20:45:58.85595Z","steps":["trace[1396568622] 'range keys from in-memory index tree'  (duration: 115.547212ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:09.855862Z","caller":"traceutil/trace.go:171","msg":"trace[811401261] transaction","detail":"{read_only:false; response_revision:1495; number_of_response:1; }","duration":"102.190223ms","start":"2024-04-29T20:46:09.753656Z","end":"2024-04-29T20:46:09.855846Z","steps":["trace[811401261] 'process raft request'  (duration: 102.095822ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:10.071953Z","caller":"traceutil/trace.go:171","msg":"trace[1996796465] transaction","detail":"{read_only:false; response_revision:1496; number_of_response:1; }","duration":"300.29343ms","start":"2024-04-29T20:46:09.77164Z","end":"2024-04-29T20:46:10.071933Z","steps":["trace[1996796465] 'process raft request'  (duration: 295.855603ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:46:10.072618Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:09.771623Z","time spent":"300.479031ms","remote":"127.0.0.1:50854","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2962,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-515700-m03\" mod_revision:1487 > success:<request_put:<key:\"/registry/minions/multinode-515700-m03\" value_size:2916 >> failure:<request_range:<key:\"/registry/minions/multinode-515700-m03\" > >"}
	{"level":"info","ts":"2024-04-29T20:46:15.569199Z","caller":"traceutil/trace.go:171","msg":"trace[1643861658] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"218.350023ms","start":"2024-04-29T20:46:15.350828Z","end":"2024-04-29T20:46:15.569178Z","steps":["trace[1643861658] 'process raft request'  (duration: 218.141522ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:15.960586Z","caller":"traceutil/trace.go:171","msg":"trace[1497086569] linearizableReadLoop","detail":"{readStateIndex:1774; appliedIndex:1773; }","duration":"367.734728ms","start":"2024-04-29T20:46:15.592832Z","end":"2024-04-29T20:46:15.960567Z","steps":["trace[1497086569] 'read index received'  (duration: 332.248313ms)","trace[1497086569] 'applied index is now lower than readState.Index'  (duration: 35.485815ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T20:46:15.960951Z","caller":"traceutil/trace.go:171","msg":"trace[818980090] transaction","detail":"{read_only:false; response_revision:1504; number_of_response:1; }","duration":"594.879604ms","start":"2024-04-29T20:46:15.36606Z","end":"2024-04-29T20:46:15.96094Z","steps":["trace[818980090] 'process raft request'  (duration: 559.784592ms)","trace[818980090] 'compare'  (duration: 34.64431ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:46:15.961608Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:15.366043Z","time spent":"594.957105ms","remote":"127.0.0.1:50958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":569,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" mod_revision:1486 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" value_size:508 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" > >"}
	{"level":"warn","ts":"2024-04-29T20:46:15.962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"369.162137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-515700-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-04-29T20:46:15.96206Z","caller":"traceutil/trace.go:171","msg":"trace[601879282] range","detail":"{range_begin:/registry/minions/multinode-515700-m03; range_end:; response_count:1; response_revision:1504; }","duration":"369.225137ms","start":"2024-04-29T20:46:15.592827Z","end":"2024-04-29T20:46:15.962052Z","steps":["trace[601879282] 'agreement among raft nodes before linearized reading'  (duration: 369.135436ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:46:15.962525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:15.592782Z","time spent":"369.464038ms","remote":"127.0.0.1:50854","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3172,"request content":"key:\"/registry/minions/multinode-515700-m03\" "}
	{"level":"warn","ts":"2024-04-29T20:46:15.962622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.652243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:46:15.962781Z","caller":"traceutil/trace.go:171","msg":"trace[632284179] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1504; }","duration":"221.955444ms","start":"2024-04-29T20:46:15.740814Z","end":"2024-04-29T20:46:15.962769Z","steps":["trace[632284179] 'agreement among raft nodes before linearized reading'  (duration: 221.659043ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:48:49 up 25 min,  0 users,  load average: 0.69, 0.78, 0.53
	Linux multinode-515700 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11141cf0a01e] <==
	I0429 20:47:46.728091       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:47:56.744391       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:47:56.744733       1 main.go:227] handling current node
	I0429 20:47:56.744914       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:47:56.745017       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:48:06.758784       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:48:06.758916       1 main.go:227] handling current node
	I0429 20:48:06.758948       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:48:06.758957       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:48:16.768837       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:48:16.768907       1 main.go:227] handling current node
	I0429 20:48:16.768920       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:48:16.768927       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:48:26.782667       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:48:26.782790       1 main.go:227] handling current node
	I0429 20:48:26.782806       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:48:26.782814       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:48:36.796751       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:48:36.796782       1 main.go:227] handling current node
	I0429 20:48:36.796794       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:48:36.796801       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:48:46.803792       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:48:46.804073       1 main.go:227] handling current node
	I0429 20:48:46.804090       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:48:46.804100       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9b9ad8fbed85] <==
	I0429 20:25:08.456691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 20:25:09.052862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 20:25:09.062497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 20:25:09.063038       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 20:25:10.434046       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 20:25:10.531926       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 20:25:10.667114       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 20:25:10.682682       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.241.25]
	I0429 20:25:10.685084       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 20:25:10.705095       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 20:25:11.202529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 20:25:11.660474       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 20:25:11.702512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 20:25:11.739640       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 20:25:25.195544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 20:25:25.294821       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 20:41:45.603992       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54600: use of closed network connection
	E0429 20:41:46.683622       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54606: use of closed network connection
	E0429 20:41:47.742503       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54616: use of closed network connection
	E0429 20:42:24.359204       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54636: use of closed network connection
	E0429 20:42:34.907983       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54638: use of closed network connection
	I0429 20:46:15.963628       1 trace.go:236] Trace[1378232527]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:bc84c8cc-c1e5-4f4d-8a1c-4ed7b226292a,client:172.17.240.210,api-group:coordination.k8s.io,api-version:v1,name:multinode-515700-m03,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-515700-m03,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (29-Apr-2024 20:46:15.363) (total time: 599ms):
	Trace[1378232527]: ["GuaranteedUpdate etcd3" audit-id:bc84c8cc-c1e5-4f4d-8a1c-4ed7b226292a,key:/leases/kube-node-lease/multinode-515700-m03,type:*coordination.Lease,resource:leases.coordination.k8s.io 599ms (20:46:15.364)
	Trace[1378232527]:  ---"Txn call completed" 598ms (20:46:15.963)]
	Trace[1378232527]: [599.725533ms] [599.725533ms] END
	
	
	==> kube-controller-manager [c5de44f1f106] <==
	I0429 20:25:25.137746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 20:25:25.742477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="536.801912ms"
	I0429 20:25:25.820241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.613668ms"
	I0429 20:25:25.820606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.801µs"
	I0429 20:25:26.647122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.452819ms"
	I0429 20:25:26.673190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.454556ms"
	I0429 20:25:26.673366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.301µs"
	I0429 20:25:35.442523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48µs"
	I0429 20:25:35.504302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.901µs"
	I0429 20:25:37.519404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.21268ms"
	I0429 20:25:37.519516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.698µs"
	I0429 20:25:39.495810       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 20:29:47.937478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.419556ms"
	I0429 20:29:47.961915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.36964ms"
	I0429 20:29:47.962862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.499µs"
	I0429 20:29:52.098445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.730146ms"
	I0429 20:29:52.098921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.902µs"
	I0429 20:46:05.025369       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-515700-m03\" does not exist"
	I0429 20:46:05.038750       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-515700-m03" podCIDRs=["10.244.1.0/24"]
	I0429 20:46:09.749698       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-515700-m03"
	I0429 20:46:28.280618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-515700-m03"
	I0429 20:46:28.324633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.8µs"
	I0429 20:46:28.354027       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.9µs"
	I0429 20:46:31.239793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.942065ms"
	I0429 20:46:31.240386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="306.702µs"
	
	
	==> kube-proxy [8d116812e2fa] <==
	I0429 20:25:27.278575       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:25:27.322396       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.241.25"]
	I0429 20:25:27.381777       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:25:27.381896       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:25:27.381924       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:25:27.389649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:25:27.392153       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:25:27.392448       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:25:27.396161       1 config.go:192] "Starting service config controller"
	I0429 20:25:27.396372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:25:27.396564       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:25:27.396976       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:25:27.399035       1 config.go:319] "Starting node config controller"
	I0429 20:25:27.399236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:25:27.497521       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:25:27.497518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:25:27.500527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7748681b165f] <==
	W0429 20:25:09.310708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.311983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.372121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.372287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.389043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:25:09.389975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 20:25:09.402308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.402357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.414781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:25:09.414997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:25:09.463545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:25:09.463684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:25:09.473360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 20:25:09.473524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 20:25:09.538214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 20:25:09.538587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 20:25:09.595918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.596510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.751697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 20:25:09.752615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 20:25:09.794103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:25:09.794595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 20:25:09.800334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 20:25:09.800494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 20:25:11.092300       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:44:11 multinode-515700 kubelet[2116]: E0429 20:44:11.928923    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:44:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:44:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:44:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:44:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:45:11 multinode-515700 kubelet[2116]: E0429 20:45:11.923458    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:45:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:45:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:45:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:45:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:46:11 multinode-515700 kubelet[2116]: E0429 20:46:11.926896    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:47:11 multinode-515700 kubelet[2116]: E0429 20:47:11.924357    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:48:11 multinode-515700 kubelet[2116]: E0429 20:48:11.922927    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:48:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:48:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:48:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:48:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:48:41.041784    8872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700: (12.3970896s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-515700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (72.11s)
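The etcd excerpt in the post-mortem above is dominated by "apply request took too long" warnings, which are the usual first signal of slow disk or CPU contention on the control-plane VM. As a triage aid, here is a minimal sketch for pulling those warnings out of a captured `minikube logs` dump. It is not part of the test suite; it assumes the JSON line shape shown in the excerpt, and only handles `took` values expressed in milliseconds (as all of the values above are).

```python
import json

# Two sample lines copied from the etcd excerpt above; in practice you would
# read the full captured "minikube logs" output instead.
LINES = [
    '{"level":"warn","ts":"2024-04-29T20:45:58.532911Z","caller":"etcdserver/util.go:170",'
    '"msg":"apply request took too long","took":"147.49431ms","expected-duration":"100ms",'
    '"prefix":"read-only range ","request":"key:\\"/registry/health\\" "}',
    '{"level":"info","ts":"2024-04-29T20:45:58.533001Z","caller":"traceutil/trace.go:171",'
    '"msg":"trace[176342803] range"}',
]

def slow_applies(lines, threshold_ms=100.0):
    """Yield (timestamp, took_ms, request) for etcd slow-apply warnings."""
    for line in lines:
        try:
            entry = json.loads(line)
        except ValueError:
            continue  # skip non-JSON log lines (kubelet, kindnet, ...)
        if entry.get("msg") != "apply request took too long":
            continue
        # Assumes millisecond values like "147.49431ms"; etcd can also emit
        # seconds ("1.2s"), which this sketch does not handle.
        took_ms = float(entry["took"].rstrip("ms"))
        if took_ms > threshold_ms:
            yield entry["ts"], took_ms, entry.get("request", "")

for ts, took, req in slow_applies(LINES):
    print(f"{ts}  {took:.1f}ms  {req}")
```

Sorting the yielded tuples by duration quickly shows whether the slowness is concentrated in a few large transactions (as with the 594ms lease update above) or spread across all reads.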

                                                
                                    
TestMultiNode/serial/StopNode (123.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 node stop m03
E0429 20:49:33.467836   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-515700 node stop m03: (34.8882103s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status: exit status 7 (26.4991952s)

                                                
                                                
-- stdout --
	multinode-515700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-515700-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-515700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:49:38.842063    9348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status --alsologtostderr
E0429 20:50:24.013865   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status --alsologtostderr: exit status 7 (26.7467435s)

                                                
                                                
-- stdout --
	multinode-515700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-515700-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-515700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:50:05.332845    3428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 20:50:05.422567    3428 out.go:291] Setting OutFile to fd 1156 ...
	I0429 20:50:05.423880    3428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:50:05.423880    3428 out.go:304] Setting ErrFile to fd 1136...
	I0429 20:50:05.423880    3428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:50:05.441257    3428 out.go:298] Setting JSON to false
	I0429 20:50:05.441257    3428 mustload.go:65] Loading cluster: multinode-515700
	I0429 20:50:05.442336    3428 notify.go:220] Checking for updates...
	I0429 20:50:05.442958    3428 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:50:05.443116    3428 status.go:255] checking status of multinode-515700 ...
	I0429 20:50:05.443333    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:50:07.668615    3428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:50:07.668615    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:07.668615    3428 status.go:330] multinode-515700 host status = "Running" (err=<nil>)
	I0429 20:50:07.668615    3428 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:50:07.669717    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:50:09.843666    3428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:50:09.844526    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:09.844624    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:50:12.482867    3428 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:50:12.482867    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:12.482867    3428 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:50:12.498176    3428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 20:50:12.498176    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:50:14.682663    3428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:50:14.683231    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:14.683323    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:50:17.493431    3428 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:50:17.493829    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:17.493829    3428 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:50:17.596179    3428 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0979624s)
	I0429 20:50:17.608672    3428 ssh_runner.go:195] Run: systemctl --version
	I0429 20:50:17.631631    3428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:50:17.661274    3428 kubeconfig.go:125] found "multinode-515700" server: "https://172.17.241.25:8443"
	I0429 20:50:17.661814    3428 api_server.go:166] Checking apiserver status ...
	I0429 20:50:17.675735    3428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:50:17.723871    3428 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup
	W0429 20:50:17.755054    3428 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:50:17.770828    3428 ssh_runner.go:195] Run: ls
	I0429 20:50:17.780744    3428 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:50:17.788731    3428 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:50:17.788731    3428 status.go:422] multinode-515700 apiserver status = Running (err=<nil>)
	I0429 20:50:17.788731    3428 status.go:257] multinode-515700 status: &{Name:multinode-515700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 20:50:17.788731    3428 status.go:255] checking status of multinode-515700-m02 ...
	I0429 20:50:17.789377    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:50:20.103324    3428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:50:20.103499    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:20.103499    3428 status.go:330] multinode-515700-m02 host status = "Running" (err=<nil>)
	I0429 20:50:20.103499    3428 host.go:66] Checking if "multinode-515700-m02" exists ...
	I0429 20:50:20.104042    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:50:22.318741    3428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:50:22.318873    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:22.318873    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:50:24.883377    3428 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:50:24.883377    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:24.883377    3428 host.go:66] Checking if "multinode-515700-m02" exists ...
	I0429 20:50:24.898661    3428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 20:50:24.898661    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:50:27.046501    3428 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:50:27.046501    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:27.046692    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:50:29.657803    3428 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:50:29.657803    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:29.658440    3428 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:50:29.748050    3428 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8492139s)
	I0429 20:50:29.763846    3428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:50:29.791432    3428 status.go:257] multinode-515700-m02 status: &{Name:multinode-515700-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 20:50:29.791432    3428 status.go:255] checking status of multinode-515700-m03 ...
	I0429 20:50:29.792391    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:50:31.917964    3428 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 20:50:31.918824    3428 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:50:31.918976    3428 status.go:330] multinode-515700-m03 host status = "Stopped" (err=<nil>)
	I0429 20:50:31.919017    3428 status.go:343] host is not running, skipping remaining checks
	I0429 20:50:31.919017    3428 status.go:257] multinode-515700-m03 status: &{Name:multinode-515700-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-515700 status --alsologtostderr": multinode-515700
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-515700-m02
type: Worker
host: Running
kubelet: Stopped

multinode-515700-m03
type: Worker
host: Stopped
kubelet: Stopped

multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-515700 status --alsologtostderr": multinode-515700
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-515700-m02
type: Worker
host: Running
kubelet: Stopped

multinode-515700-m03
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700: (12.4707402s)
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25: (8.6631034s)
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-515700 -- rollout       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:29 UTC |                     |
	|         | status deployment/busybox            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | busybox-fc5497c4f-dv5v8              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-dv5v8 -- sh        |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.240.1            |                  |                   |         |                     |                     |
	| node    | add -p multinode-515700 -v 3         | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:43 UTC | 29 Apr 24 20:46 UTC |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | multinode-515700 node stop m03       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:49 UTC | 29 Apr 24 20:49 UTC |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:22:01
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:22:01.431751    6560 out.go:291] Setting OutFile to fd 1000 ...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.432590    6560 out.go:304] Setting ErrFile to fd 1156...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.463325    6560 out.go:298] Setting JSON to false
	I0429 20:22:01.467738    6560 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24060,"bootTime":1714398060,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 20:22:01.467738    6560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 20:22:01.473386    6560 out.go:177] * [multinode-515700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 20:22:01.477900    6560 notify.go:220] Checking for updates...
	I0429 20:22:01.480328    6560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:22:01.485602    6560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:22:01.488123    6560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 20:22:01.490657    6560 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:22:01.493249    6560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:22:01.496241    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:22:01.497610    6560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:22:06.930154    6560 out.go:177] * Using the hyperv driver based on user configuration
	I0429 20:22:06.933587    6560 start.go:297] selected driver: hyperv
	I0429 20:22:06.933587    6560 start.go:901] validating driver "hyperv" against <nil>
	I0429 20:22:06.933587    6560 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:22:06.986262    6560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 20:22:06.987723    6560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:22:06.988334    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:22:06.988334    6560 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 20:22:06.988334    6560 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 20:22:06.988334    6560 start.go:340] cluster config:
	{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:22:06.988334    6560 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:22:06.992867    6560 out.go:177] * Starting "multinode-515700" primary control-plane node in "multinode-515700" cluster
	I0429 20:22:06.995976    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:22:06.996499    6560 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 20:22:06.996703    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:22:06.996741    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:22:06.996741    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:22:06.996741    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:22:06.996741    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json: {Name:mkdf346f9e30a055d7c79ffb416c8ce539e0c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:360] acquireMachinesLock for multinode-515700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700"
	I0429 20:22:06.999081    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:22:06.999081    6560 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 20:22:07.006481    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:22:07.006790    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:22:07.006790    6560 client.go:168] LocalClient.Create starting
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:22:09.217702    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:22:09.217822    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:09.217951    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:22:11.056235    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:12.618512    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stderr =====>] : 
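	The `Get-VMSwitch` query above returns a JSON array filtered to External switches plus the well-known "Default Switch" GUID; with only the Default Switch present, that is what gets used. A minimal Python sketch of the selection logic this implies (the helper is hypothetical, not minikube's actual code; `SwitchType` 2 is External in Hyper-V's enum, 1 is Internal):

	```python
	import json

	def pick_switch(stdout: str, default_id: str) -> str:
	    """Pick a usable Hyper-V virtual switch from the ConvertTo-Json output."""
	    switches = json.loads(stdout)
	    # Prefer an External switch (SwitchType 2) when one exists...
	    for s in switches:
	        if s["SwitchType"] == 2:
	            return s["Name"]
	    # ...otherwise fall back to the well-known Default Switch by GUID.
	    for s in switches:
	        if s["Id"].lower() == default_id.lower():
	            return s["Name"]
	    raise RuntimeError("no usable virtual switch found")
	```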
	I0429 20:22:16.461966    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:22:17.019827    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: Creating VM...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:20.140355    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:22:20.140483    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:22.004896    6560 main.go:141] libmachine: Creating VHD
	I0429 20:22:22.004896    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:22:25.795387    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DA11902-3EE7-4F99-A00A-752C0686FD99
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:22:25.796445    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:25.796496    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:22:25.796702    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:22:25.814462    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:22:29.034595    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:29.035273    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:29.035337    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -SizeBytes 20000MB
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:31.671427    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-515700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:35.461856    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700 -DynamicMemoryEnabled $false
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:37.723890    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700 -Count 2
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\boot2docker.iso'
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:42.558432    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd'
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:45.265400    6560 main.go:141] libmachine: Starting VM...
	I0429 20:22:45.265400    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:50.732199    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:50.733048    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:50.733149    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:54.308058    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:56.517062    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:59.110985    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:59.111613    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:00.127675    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:02.349860    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:05.987459    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:08.224322    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:10.790333    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:10.791338    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:11.803237    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:14.061252    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stderr =====>] : 
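	The repeated `( Get-VM ).state` / `.networkadapters[0].ipaddresses[0]` queries above are a poll-until-ready loop: the driver keeps asking for the VM's first IP address (with a short sleep between rounds) until DHCP has assigned one, here `172.17.241.25` after roughly 30 seconds. A minimal Python sketch of that pattern (helper names are hypothetical stand-ins for the two PowerShell queries):

	```python
	import time

	def wait_for_ip(get_state, get_ip, attempts=60, delay=1.0):
	    """Poll VM state, then its first IP address, until one is assigned."""
	    for _ in range(attempts):
	        if get_state() == "Running":
	            ip = get_ip()
	            if ip:  # empty string means DHCP has not answered yet
	                return ip
	        time.sleep(delay)
	    raise TimeoutError("host did not obtain an IP address")
	```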
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:18.855659    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:23:18.855911    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:21.063683    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:23.697335    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:23.697580    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:23.703285    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:23.713965    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:23.713965    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:23:23.854760    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:23:23.854760    6560 buildroot.go:166] provisioning hostname "multinode-515700"
	I0429 20:23:23.854760    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:26.029157    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:26.029995    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:26.030093    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:28.624899    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:28.625217    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:28.625495    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700 && echo "multinode-515700" | sudo tee /etc/hostname
	I0429 20:23:28.799265    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700
	
	I0429 20:23:28.799376    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:30.924333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:33.588985    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:33.589381    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:33.589381    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:23:33.743242    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:23:33.743242    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:23:33.743242    6560 buildroot.go:174] setting up certificates
	I0429 20:23:33.743242    6560 provision.go:84] configureAuth start
	I0429 20:23:33.743939    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:35.885562    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:38.477298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:40.581307    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:43.165623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:43.165853    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:43.165933    6560 provision.go:143] copyHostCerts
	I0429 20:23:43.166093    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:23:43.166093    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:23:43.166093    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:23:43.166722    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:23:43.168141    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:23:43.168305    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:23:43.168305    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:23:43.168887    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:23:43.169614    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:23:43.170245    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:23:43.170340    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:23:43.170731    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:23:43.171712    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700 san=[127.0.0.1 172.17.241.25 localhost minikube multinode-515700]
	I0429 20:23:43.368646    6560 provision.go:177] copyRemoteCerts
	I0429 20:23:43.382882    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:23:43.382882    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:45.539057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:48.109324    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:23:48.217340    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8343588s)
	I0429 20:23:48.217478    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:23:48.218375    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:23:48.267636    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:23:48.267636    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 20:23:48.316493    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:23:48.316784    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:23:48.372851    6560 provision.go:87] duration metric: took 14.6294509s to configureAuth
	I0429 20:23:48.372952    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:23:48.373086    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:23:48.373086    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:50.522765    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:50.522998    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:50.523146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:53.169650    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:53.170462    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:53.170462    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:23:53.302673    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:23:53.302726    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:23:53.302726    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:23:53.302726    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:55.434984    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:58.060160    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:58.061082    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:58.067077    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:58.068199    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:58.068292    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:23:58.226608    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:23:58.227212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:00.358933    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:02.944293    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:02.944373    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:02.950227    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:02.950958    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:02.950958    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:24:05.224184    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
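	The SSH command above is a write-new/diff/swap install idiom: the unit is written to `docker.service.new`, and only if it differs from the live file (or the live file is missing, as the `diff: can't stat` output shows here) is it moved into place and the daemon reloaded and restarted. A small Python sketch of the same idiom under illustrative temp paths (not the real `/lib/systemd` flow):

	```python
	import filecmp
	import os
	import shutil

	def install_if_changed(new_path: str, live_path: str) -> bool:
	    """Replace live_path with new_path only when the content differs."""
	    if os.path.exists(live_path) and filecmp.cmp(live_path, new_path, shallow=False):
	        os.remove(new_path)
	        return False  # unchanged: no reload/restart needed
	    shutil.move(new_path, live_path)
	    return True  # changed or newly installed: caller would daemon-reload + restart
	```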
	
	I0429 20:24:05.224184    6560 machine.go:97] duration metric: took 46.3681587s to provisionDockerMachine
	I0429 20:24:05.224184    6560 client.go:171] duration metric: took 1m58.2164577s to LocalClient.Create
	I0429 20:24:05.224184    6560 start.go:167] duration metric: took 1m58.2164577s to libmachine.API.Create "multinode-515700"
	I0429 20:24:05.224184    6560 start.go:293] postStartSetup for "multinode-515700" (driver="hyperv")
	I0429 20:24:05.224184    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:24:05.241199    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:24:05.241199    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:07.393879    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:09.983789    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:09.984033    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:09.984469    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:10.092254    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8510176s)
	I0429 20:24:10.107982    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:24:10.116700    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:24:10.116700    6560 command_runner.go:130] > ID=buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:24:10.116700    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:24:10.116700    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:24:10.116700    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:24:10.117268    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:24:10.118515    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:24:10.118515    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:24:10.132514    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:24:10.152888    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:24:10.201665    6560 start.go:296] duration metric: took 4.9774423s for postStartSetup
	I0429 20:24:10.204966    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:12.345708    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:12.345785    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:12.345855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:14.957675    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:24:14.960758    6560 start.go:128] duration metric: took 2m7.9606641s to createHost
	I0429 20:24:14.962026    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:17.100197    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:17.100281    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:17.100354    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:19.725196    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:19.725860    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:19.725860    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:24:19.867560    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422259.868914581
	
	I0429 20:24:19.867560    6560 fix.go:216] guest clock: 1714422259.868914581
	I0429 20:24:19.867694    6560 fix.go:229] Guest: 2024-04-29 20:24:19.868914581 +0000 UTC Remote: 2024-04-29 20:24:14.9613787 +0000 UTC m=+133.724240401 (delta=4.907535881s)
	I0429 20:24:19.867694    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:22.005967    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:24.588016    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:24.588016    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:24.588016    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422259
	I0429 20:24:24.741766    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:24:19 UTC 2024
	
	I0429 20:24:24.741837    6560 fix.go:236] clock set: Mon Apr 29 20:24:19 UTC 2024
	 (err=<nil>)
	I0429 20:24:24.741837    6560 start.go:83] releasing machines lock for "multinode-515700", held for 2m17.7427319s
	I0429 20:24:24.742129    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:26.884301    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:29.475377    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:29.476046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:29.480912    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:24:29.481639    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:29.493304    6560 ssh_runner.go:195] Run: cat /version.json
	I0429 20:24:29.493304    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:31.702922    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:34.435635    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.436190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.436258    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.480228    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.481073    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.481135    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.531424    6560 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 20:24:34.531720    6560 ssh_runner.go:235] Completed: cat /version.json: (5.0383759s)
	I0429 20:24:34.545943    6560 ssh_runner.go:195] Run: systemctl --version
	I0429 20:24:34.614256    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:24:34.615354    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1343125s)
	I0429 20:24:34.615354    6560 command_runner.go:130] > systemd 252 (252)
	I0429 20:24:34.615354    6560 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 20:24:34.630005    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:24:34.639051    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 20:24:34.639955    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:24:34.653590    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:24:34.683800    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:24:34.683903    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:24:34.683903    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:34.684139    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:34.720958    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:24:34.735137    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:24:34.769077    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:24:34.791121    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:24:34.804751    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:24:34.838781    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.871052    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:24:34.905043    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.940043    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:24:34.975295    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:24:35.009502    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:24:35.044104    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
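The run of `sed` edits above rewrites containerd's config.toml in place so the runtime uses the cgroupfs driver and the runc v2 shim. The SystemdCgroup rewrite (`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`) can be sketched in Go with the same regex semantics; the config fragment below is illustrative, not the full file:

```go
package main

import (
	"fmt"
	"regexp"
)

// setCgroupfs applies the same substitution as the sed command above: any
// "SystemdCgroup = ..." line, at any indentation, is forced to false while
// the leading whitespace (capture group 1) is preserved.
func setCgroupfs(config string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	sample := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true`
	fmt.Println(setCgroupfs(sample))
}
```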
	I0429 20:24:35.078095    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:24:35.099570    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:24:35.114246    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:24:35.146794    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:35.365920    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:24:35.402710    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:35.417050    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Unit]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:24:35.443946    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:24:35.443946    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:24:35.443946    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Service]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Type=notify
	I0429 20:24:35.443946    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:24:35.443946    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:24:35.443946    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:24:35.443946    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:24:35.443946    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:24:35.443946    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:24:35.443946    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:24:35.443946    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:24:35.443946    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:24:35.443946    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:24:35.443946    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:24:35.443946    6560 command_runner.go:130] > Delegate=yes
	I0429 20:24:35.443946    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:24:35.443946    6560 command_runner.go:130] > KillMode=process
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Install]
	I0429 20:24:35.444947    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:24:35.457957    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.500818    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:24:35.548559    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.585869    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.622879    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:24:35.694256    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.721660    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:35.757211    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:24:35.773795    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:24:35.779277    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:24:35.793892    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:24:35.813834    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:24:35.865638    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:24:36.085117    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:24:36.291781    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:24:36.291781    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:24:36.337637    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:36.567033    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:39.106704    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396504s)
	I0429 20:24:39.121937    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 20:24:39.164421    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.201973    6560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 20:24:39.432817    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 20:24:39.648494    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:39.872471    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 20:24:39.918782    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.959078    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:40.189711    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 20:24:40.314827    6560 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 20:24:40.327765    6560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 20:24:40.337989    6560 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 20:24:40.338077    6560 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 20:24:40.338077    6560 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Modify: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Change: 2024-04-29 20:24:40.227771386 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] >  Birth: -
	I0429 20:24:40.338228    6560 start.go:562] Will wait 60s for crictl version
	I0429 20:24:40.353543    6560 ssh_runner.go:195] Run: which crictl
	I0429 20:24:40.359551    6560 command_runner.go:130] > /usr/bin/crictl
	I0429 20:24:40.372542    6560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:24:40.422534    6560 command_runner.go:130] > Version:  0.1.0
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeName:  docker
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 20:24:40.422534    6560 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 20:24:40.433531    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.468470    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.477791    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.510922    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.518057    6560 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 20:24:40.518283    6560 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: 172.17.240.1/20
	I0429 20:24:40.538782    6560 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 20:24:40.546082    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
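The bash one-liner above makes the `host.minikube.internal` mapping idempotent: filter out any existing entry with `grep -v`, append the fresh one, then copy the temp file over /etc/hosts. A minimal sketch of the same filter-then-append step, operating on hosts-file text rather than the real /etc/hosts (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostEntry drops any line already ending in "\thost.minikube.internal"
// and appends the new mapping, mirroring the grep -v / echo pipeline above.
func upsertHostEntry(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // stale mapping: filtered out, like grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.17.240.9\thost.minikube.internal\n"
	fmt.Print(upsertHostEntry(hosts, "172.17.240.1"))
}
```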
	I0429 20:24:40.569927    6560 kubeadm.go:877] updating cluster {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:24:40.570125    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:24:40.581034    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:40.605162    6560 docker.go:685] Got preloaded images: 
	I0429 20:24:40.605162    6560 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 20:24:40.617894    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:40.637456    6560 command_runner.go:139] > {"Repositories":{}}
	I0429 20:24:40.652557    6560 ssh_runner.go:195] Run: which lz4
	I0429 20:24:40.659728    6560 command_runner.go:130] > /usr/bin/lz4
	I0429 20:24:40.659728    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 20:24:40.676390    6560 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:24:40.682600    6560 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 20:24:43.151463    6560 docker.go:649] duration metric: took 2.4917153s to copy over tarball
	I0429 20:24:43.166991    6560 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:24:51.777678    6560 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6106197s)
	I0429 20:24:51.777678    6560 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:24:51.848689    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:51.869772    6560 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 20:24:51.869772    6560 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 20:24:51.923721    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:52.150884    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:55.504316    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3534062s)
	I0429 20:24:55.515091    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:24:55.540357    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 20:24:55.540357    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:24:55.540557    6560 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 20:24:55.540557    6560 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:24:55.540557    6560 kubeadm.go:928] updating node { 172.17.241.25 8443 v1.30.0 docker true true} ...
	I0429 20:24:55.540557    6560 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-515700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.241.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
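	(Note on the unit fragment above: the bare `ExecStart=` line before the real `ExecStart=` is deliberate systemd drop-in syntax. A drop-in must first clear an inherited `ExecStart` before overriding it, otherwise the service would declare two start commands and fail to load. A minimal sketch of the pattern, with an illustrative path:)

```
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch)
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
```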
	I0429 20:24:55.550945    6560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 20:24:55.586940    6560 command_runner.go:130] > cgroupfs
	I0429 20:24:55.587354    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:24:55.587354    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:24:55.587354    6560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:24:55.587354    6560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.241.25 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-515700 NodeName:multinode-515700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.241.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.241.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:24:55.587882    6560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.241.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-515700"
	  kubeletExtraArgs:
	    node-ip: 172.17.241.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.241.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
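	(The generated config above splits the cluster address space into a pod CIDR, `10.244.0.0/16`, and a service CIDR, `10.96.0.0/12`; these must not overlap for routing to work. A quick sanity check of the two ranges from this log, as a sketch rather than minikube's own validation code:)

```python
import ipaddress

# CIDRs exactly as they appear in the kubeadm config above
pod_cidr = ipaddress.ip_network("10.244.0.0/16")
service_cidr = ipaddress.ip_network("10.96.0.0/12")

# 10.96.0.0/12 covers 10.96.0.0 - 10.111.255.255, so the pod range is disjoint
assert not pod_cidr.overlaps(service_cidr)
print(pod_cidr.num_addresses, service_cidr.num_addresses)  # → 65536 1048576
```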
	I0429 20:24:55.601173    6560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubeadm
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubectl
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubelet
	I0429 20:24:55.622022    6560 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:24:55.633924    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:24:55.654273    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 20:24:55.692289    6560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:24:55.726319    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:24:55.774801    6560 ssh_runner.go:195] Run: grep 172.17.241.25	control-plane.minikube.internal$ /etc/hosts
	I0429 20:24:55.781653    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.241.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
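	(The bash one-liner above makes the control-plane hosts entry idempotent: it strips any stale line ending in the hostname, then appends the current IP. The same logic as a sketch, operating on an in-memory list rather than the guest's real /etc/hosts; the stale IP `172.17.0.9` is invented for illustration:)

```python
def upsert_host(lines, ip, name):
    """Drop any existing entry for `name`, then append one for `ip`."""
    kept = [l for l in lines if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return kept

hosts = ["127.0.0.1\tlocalhost", "172.17.0.9\tcontrol-plane.minikube.internal"]
hosts = upsert_host(hosts, "172.17.241.25", "control-plane.minikube.internal")
# The stale 172.17.0.9 entry is replaced rather than duplicated:
print(sum(l.endswith("control-plane.minikube.internal") for l in hosts))  # → 1
```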
	I0429 20:24:55.820570    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:56.051044    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:24:56.087660    6560 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700 for IP: 172.17.241.25
	I0429 20:24:56.087753    6560 certs.go:194] generating shared ca certs ...
	I0429 20:24:56.087824    6560 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 20:24:56.089063    6560 certs.go:256] generating profile certs ...
	I0429 20:24:56.089855    6560 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key
	I0429 20:24:56.089855    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt with IP's: []
	I0429 20:24:56.283640    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt ...
	I0429 20:24:56.284633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt: {Name:mk1286f657dae134d1e4806ec4fc1d780c02f0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.285633    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key ...
	I0429 20:24:56.285633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key: {Name:mka98d4501f3f942abed1092b1c97c4a2bbd30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.286633    6560 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d
	I0429 20:24:56.287300    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.241.25]
	I0429 20:24:56.456862    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d ...
	I0429 20:24:56.456862    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d: {Name:mk09d828589f59d94791e90fc999c9ce1101118e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d ...
	I0429 20:24:56.458476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d: {Name:mk92ebf0409a99e4a3e3430ff86080f164f4bc0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458796    6560 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt
	I0429 20:24:56.473961    6560 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key
	I0429 20:24:56.474965    6560 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key
	I0429 20:24:56.474965    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt with IP's: []
	I0429 20:24:56.680472    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt ...
	I0429 20:24:56.680472    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt: {Name:mkc600562c7738e3eec9de4025428e3048df463a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.682476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key ...
	I0429 20:24:56.682476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key: {Name:mkc9ba6e1afbc9ca05ce8802b568a72bfd19a90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 20:24:56.685482    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 20:24:56.693323    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 20:24:56.701358    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 20:24:56.702409    6560 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 20:24:56.702718    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 20:24:56.702843    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 20:24:56.705315    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:24:56.758912    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 20:24:56.809584    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:24:56.860874    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:24:56.918708    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:24:56.969377    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:24:57.018903    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:24:57.070438    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:24:57.119823    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:24:57.168671    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 20:24:57.216697    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 20:24:57.263605    6560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:24:57.314590    6560 ssh_runner.go:195] Run: openssl version
	I0429 20:24:57.325614    6560 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 20:24:57.340061    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:24:57.374639    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.394971    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.404667    6560 command_runner.go:130] > b5213941
	I0429 20:24:57.419162    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:24:57.454540    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 20:24:57.494441    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.501867    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.502224    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.517134    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.527174    6560 command_runner.go:130] > 51391683
	I0429 20:24:57.544472    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 20:24:57.579789    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 20:24:57.613535    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622605    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622696    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.637764    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.649176    6560 command_runner.go:130] > 3ec20f2e
	I0429 20:24:57.665410    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
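	(The repeated `openssl x509 -hash -noout` / `ln -fs .../<hash>.0` sequence above implements OpenSSL's c_rehash lookup convention: each CA in /etc/ssl/certs gets a symlink named after its 8-hex-digit subject hash, with a numeric suffix (.0, .1, ...) to disambiguate hash collisions, so OpenSSL can locate an issuer by hashing its subject name instead of scanning every file. A sketch of the resulting directory layout, using the three hashes and paths from this run; the `resolve` helper is hypothetical:)

```python
# Subject-hash symlinks as created in the log above; OpenSSL resolves an
# issuer by hashing its subject and probing "<hash>.<n>" for n = 0, 1, ...
links = {
    "b5213941.0": "/usr/share/ca-certificates/minikubeCA.pem",
    "51391683.0": "/usr/share/ca-certificates/13756.pem",
    "3ec20f2e.0": "/usr/share/ca-certificates/137562.pem",
}

def resolve(subject_hash, n=0):
    """Mimic the probe order for a c_rehash-style cert directory (sketch)."""
    return links.get(f"{subject_hash}.{n}")

print(resolve("b5213941"))  # → /usr/share/ca-certificates/minikubeCA.pem
```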
	I0429 20:24:57.708796    6560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:24:57.716466    6560 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717133    6560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717510    6560 kubeadm.go:391] StartCluster: {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:24:57.729105    6560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 20:24:57.771112    6560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 20:24:57.792952    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 20:24:57.807601    6560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:24:57.837965    6560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 20:24:57.856820    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:156] found existing configuration files:
	
	I0429 20:24:57.872870    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:24:57.892109    6560 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.892549    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.905782    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:24:57.939062    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:24:57.957607    6560 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.957753    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.972479    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:24:58.006849    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.025918    6560 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.025918    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.039054    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.072026    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:24:58.092314    6560 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.092673    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.105776    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:24:58.124274    6560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:24:58.562957    6560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:24:58.562957    6560 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:25:12.186137    6560 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186137    6560 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186277    6560 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 20:25:12.186320    6560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:25:12.186516    6560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.187085    6560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.187131    6560 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.190071    6560 out.go:204]   - Generating certificates and keys ...
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190667    6560 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.191251    6560 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191251    6560 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 20:25:12.193040    6560 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193086    6560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193701    6560 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193701    6560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.198949    6560 out.go:204]   - Booting up control plane ...
	I0429 20:25:12.193843    6560 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199855    6560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:25:12.200494    6560 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200494    6560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201207    6560 command_runner.go:130] > [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.202201    6560 command_runner.go:130] > [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 out.go:204]   - Configuring RBAC rules ...
	I0429 20:25:12.202201    6560 command_runner.go:130] > [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.204361    6560 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.206433    6560 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206433    6560 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206983    6560 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 kubeadm.go:309] 
	I0429 20:25:12.207142    6560 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 kubeadm.go:309] 
	I0429 20:25:12.207365    6560 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207404    6560 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207464    6560 kubeadm.go:309] 
	I0429 20:25:12.207514    6560 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 20:25:12.207589    6560 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:25:12.207764    6560 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.207807    6560 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.208030    6560 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 kubeadm.go:309] 
	I0429 20:25:12.208230    6560 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208230    6560 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208281    6560 kubeadm.go:309] 
	I0429 20:25:12.208375    6560 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208375    6560 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208442    6560 kubeadm.go:309] 
	I0429 20:25:12.208643    6560 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 20:25:12.208733    6560 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:25:12.208874    6560 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.208936    6560 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.209014    6560 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209014    6560 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209735    6560 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209735    6560 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--control-plane 
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--control-plane 
	I0429 20:25:12.210277    6560 kubeadm.go:309] 
	I0429 20:25:12.210538    6560 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] 
	I0429 20:25:12.210726    6560 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210726    6560 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210937    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:25:12.211197    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:25:12.215717    6560 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 20:25:12.234164    6560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 20:25:12.242817    6560 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: 2024-04-29 20:23:14.801002600 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Change: 2024-04-29 20:23:06.257000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] >  Birth: -
	I0429 20:25:12.242817    6560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 20:25:12.242817    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 20:25:12.301387    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 20:25:13.060621    6560 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > serviceaccount/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > daemonset.apps/kindnet created
	I0429 20:25:13.060707    6560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-515700 minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=multinode-515700 minikube.k8s.io/primary=true
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.092072    6560 command_runner.go:130] > -16
	I0429 20:25:13.092113    6560 ops.go:34] apiserver oom_adj: -16
	I0429 20:25:13.290753    6560 command_runner.go:130] > node/multinode-515700 labeled
	I0429 20:25:13.292700    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 20:25:13.306335    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.426974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:13.819653    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.947766    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.320587    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.442246    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.822864    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.943107    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.309117    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.432718    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.814070    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.933861    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.317878    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.440680    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.819594    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.942387    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.322995    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.435199    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.809136    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.932465    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.308164    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.429047    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.808817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.928476    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.310090    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.432479    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.815590    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.929079    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.321723    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.442512    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.819466    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.933742    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.309314    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.424974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.811819    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.952603    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.316794    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.432125    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.808890    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.925838    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.310021    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.434432    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.819369    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.948876    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.307817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.457947    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.818980    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.932003    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:25.308659    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:25.488149    6560 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 20:25:25.488217    6560 command_runner.go:130] > default   0         1s
	I0429 20:25:25.489686    6560 kubeadm.go:1107] duration metric: took 12.4288824s to wait for elevateKubeSystemPrivileges
	W0429 20:25:25.489686    6560 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:25:25.489686    6560 kubeadm.go:393] duration metric: took 27.7719601s to StartCluster
	I0429 20:25:25.490694    6560 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.490694    6560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:25.491677    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.493697    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 20:25:25.493697    6560 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:25:25.498680    6560 out.go:177] * Verifying Kubernetes components...
	I0429 20:25:25.493697    6560 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:25:25.494664    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:25.504657    6560 addons.go:69] Setting storage-provisioner=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:69] Setting default-storageclass=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:234] Setting addon storage-provisioner=true in "multinode-515700"
	I0429 20:25:25.504657    6560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-515700"
	I0429 20:25:25.504657    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.520673    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:25:25.944109    6560 command_runner.go:130] > apiVersion: v1
	I0429 20:25:25.944267    6560 command_runner.go:130] > data:
	I0429 20:25:25.944267    6560 command_runner.go:130] >   Corefile: |
	I0429 20:25:25.944367    6560 command_runner.go:130] >     .:53 {
	I0429 20:25:25.944367    6560 command_runner.go:130] >         errors
	I0429 20:25:25.944367    6560 command_runner.go:130] >         health {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            lameduck 5s
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         ready
	I0429 20:25:25.944367    6560 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            pods insecure
	I0429 20:25:25.944367    6560 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 20:25:25.944367    6560 command_runner.go:130] >            ttl 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         prometheus :9153
	I0429 20:25:25.944367    6560 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            max_concurrent 1000
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         cache 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loop
	I0429 20:25:25.944367    6560 command_runner.go:130] >         reload
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loadbalance
	I0429 20:25:25.944367    6560 command_runner.go:130] >     }
	I0429 20:25:25.944367    6560 command_runner.go:130] > kind: ConfigMap
	I0429 20:25:25.944367    6560 command_runner.go:130] > metadata:
	I0429 20:25:25.944367    6560 command_runner.go:130] >   creationTimestamp: "2024-04-29T20:25:11Z"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   name: coredns
	I0429 20:25:25.944367    6560 command_runner.go:130] >   namespace: kube-system
	I0429 20:25:25.944367    6560 command_runner.go:130] >   resourceVersion: "265"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   uid: af2c186a-a14a-4671-8545-05c5ef5d4a89
	I0429 20:25:25.949389    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 20:25:26.023682    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:25:26.408680    6560 command_runner.go:130] > configmap/coredns replaced
	I0429 20:25:26.414254    6560 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
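	(For readability: the `sed` expression in the preceding `ssh_runner` command inserts a `hosts` stanza ahead of the `forward` block, so the addition to the Corefile, reconstructed from the sed arguments logged above, looks like:

```
        hosts {
           172.17.240.1 host.minikube.internal
           fallthrough
        }
```

	This is what "host record injected into CoreDNS's ConfigMap" refers to.)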
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.417677    6560 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 20:25:26.417677    6560 node_ready.go:35] waiting up to 6m0s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.435291    6560 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.437034    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.438430    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Audit-Id: a2ae57e5-53a3-4342-ad5c-c2149e87ef04
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438430    6560 round_trippers.go:580]     Audit-Id: 2e6b22a8-9874-417c-a6a5-f7b7437121f7
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438909    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.439086    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:26.440203    6560 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.440298    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.440406    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.440406    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.440519    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:26.440519    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.459913    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.459962    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Audit-Id: 9ca07d91-957f-4992-9642-97b01e07dde3
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.459962    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"393","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.918339    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.918339    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918339    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918339    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.918300    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.918498    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918580    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918580    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Audit-Id: 70383541-35df-461a-b4fb-41bd3b56f11d
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Audit-Id: e628428d-1384-4709-a32e-084c9dfec614
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.929164    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"404","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.929400    6560 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-515700" context rescaled to 1 replicas
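	(The rescale above is a plain round-trip against the `autoscaling/v1` Scale subresource: minikube GETs the current Scale object, sets `spec.replicas` to 1, and PUTs it back, which is why the logged request body matches the GET response except for that one field. A minimal sketch of the transformation; the dict literal is transcribed from the response body logged above, everything else is illustrative:

```python
import json

# Scale object as returned by GET .../deployments/coredns/scale (transcribed from the log)
scale_response = json.loads(
    '{"kind":"Scale","apiVersion":"autoscaling/v1",'
    '"metadata":{"name":"coredns","namespace":"kube-system",'
    '"uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391",'
    '"creationTimestamp":"2024-04-29T20:25:11Z"},'
    '"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}'
)

# Build the PUT body: only spec.replicas changes; metadata (including
# resourceVersion, used for optimistic concurrency) and status stay as observed.
put_body = dict(scale_response)
put_body["spec"] = {"replicas": 1}

print(json.dumps(put_body["spec"]))
```
	)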
	I0429 20:25:26.929400    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.426913    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.426913    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.426913    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.426913    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.430795    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:27.430795    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Audit-Id: e4e6b2b1-e008-4f2a-bae4-3596fce97666
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.430996    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.431340    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.789217    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.789348    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.792426    6560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:25:27.791141    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:27.795103    6560 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:27.795205    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:25:27.795205    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.795205    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:27.795924    6560 addons.go:234] Setting addon default-storageclass=true in "multinode-515700"
	I0429 20:25:27.795924    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:27.796802    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.922993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.923088    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.923175    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.923175    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.929435    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:27.929435    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.929545    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.929545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.929638    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Audit-Id: 8ef77f9f-d18f-4fa7-ab77-85c137602c84
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.930046    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.432611    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.432611    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.432611    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.432611    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.441320    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:28.441862    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Audit-Id: d32cd9f8-494c-4a69-b028-606c7d354657
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.442308    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.442914    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:28.927674    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.927674    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.927674    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.927897    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.933213    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:28.933794    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Audit-Id: 75d40b2c-c2ed-4221-9361-88591791a649
	I0429 20:25:28.934208    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.422724    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.422898    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.422898    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.422975    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.426431    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:29.426876    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Audit-Id: dde47b6c-069b-408d-a5c6-0a2ea7439643
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.427261    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.918308    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.918308    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.918308    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.918407    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.921072    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:29.921072    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.921072    6560 round_trippers.go:580]     Audit-Id: d4643df6-68ad-4c4c-9604-a5a4d019fba1
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.922076    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.057466    6560 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:30.057636    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:25:30.057750    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:30.145026    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:30.424041    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.424310    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.424310    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.424310    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.428606    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.429051    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.429263    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Audit-Id: 2c59a467-8079-41ed-ac1d-f96dd660d343
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.429435    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.931993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.931993    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.931993    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.931993    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.936635    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.936635    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.937644    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Audit-Id: 9214de5b-8221-4c68-b6b9-a92fe7d41fd1
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.938175    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.939066    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:31.423866    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.423866    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.423866    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.423988    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.427054    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.427827    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.427827    6560 round_trippers.go:580]     Audit-Id: 5f66acb8-ef38-4220-83b6-6e87fbec6f58
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.427869    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:31.932664    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.932664    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.932761    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.932761    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.936680    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.936680    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Audit-Id: f9fb721e-ccaf-4e33-ac69-8ed840761191
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.937009    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.312723    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:32.424680    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.424953    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.424953    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.424953    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.428624    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:32.428906    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.428906    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Audit-Id: d3a39f3a-571d-46c0-a442-edf136da8a11
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.429531    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.858444    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:32.926226    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.926317    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.926393    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.926393    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.929204    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:32.929583    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.929583    6560 round_trippers.go:580]     Audit-Id: 55fc987d-65c0-4ac8-95d2-7fa4185e179b
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.929734    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.930327    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.034553    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:33.425759    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.425833    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.425833    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.425833    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.428624    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.429656    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Audit-Id: d581fce7-8906-48d7-8e13-2d1aba9dec04
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.429916    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.430438    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:33.930984    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.931053    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.931053    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.931053    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.933717    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.933717    6560 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.933717    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.933717    6560 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Audit-Id: 680ed792-db71-4b29-abb9-40f7154e8b1e
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.933717    6560 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 20:25:33.933717    6560 command_runner.go:130] > pod/storage-provisioner created
	I0429 20:25:33.933717    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.428102    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.428102    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.428102    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.428102    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.431722    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:34.432624    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Audit-Id: 86cc0608-3000-42b0-9ce8-4223e32d60c3
	I0429 20:25:34.432684    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.433082    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.932029    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.932316    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.932316    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.932316    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.936749    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:34.936749    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Audit-Id: 0e63a4db-3dd4-4e74-8b79-c019b6b97e89
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.937415    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:35.024893    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:35.025151    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:35.025317    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:35.170634    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:35.371184    6560 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0429 20:25:35.371418    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 20:25:35.371571    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.371571    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.371571    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.380781    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:35.381213    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Audit-Id: 31f5e265-3d38-4520-88d0-33f47325189c
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Length: 1273
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.381380    6560 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 20:25:35.382106    6560 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.382183    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 20:25:35.382183    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:35.382269    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.390758    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.390758    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.390758    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Audit-Id: 4dbb716e-2d97-4c38-b342-f63e7d38a4d0
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Length: 1220
	I0429 20:25:35.391190    6560 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.395279    6560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 20:25:35.397530    6560 addons.go:505] duration metric: took 9.9037568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 20:25:35.421733    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.421733    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.421733    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.421733    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.452743    6560 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 20:25:35.452743    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.452743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Audit-Id: 316d0393-7ba5-4629-87cb-7ae54d0ea965
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.454477    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.455068    6560 node_ready.go:49] node "multinode-515700" has status "Ready":"True"
	I0429 20:25:35.455148    6560 node_ready.go:38] duration metric: took 9.0374019s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:35.455148    6560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:35.455213    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:35.455213    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.455213    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.455213    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.473128    6560 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 20:25:35.473128    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Audit-Id: 81e159c0-b703-47ba-a9f3-82cc907b8705
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.475820    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"431","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0429 20:25:35.481714    6560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:35.482325    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.482379    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.482379    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.482432    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.491093    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.491093    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Audit-Id: a2eb7ca2-d415-4a7c-a1f0-1ac743bd8f82
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.492090    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.493335    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.493335    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.493335    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.493419    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.496084    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:35.496084    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.496084    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Audit-Id: f61c97ad-ee7a-4666-9244-d7d2091b5d09
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.497097    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.497131    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.497332    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.991323    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.991323    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.991323    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.991323    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.995451    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:35.995451    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Audit-Id: faa8a1a4-279f-4dc3-99c8-8c3b9e9ed746
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:35.996592    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.997239    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.997292    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.997292    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.997292    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.999987    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:35.999987    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Audit-Id: 070c7fff-f707-4b9a-9aef-031cedc68a8c
	I0429 20:25:36.000411    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.483004    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.483004    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.483004    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.483004    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.488152    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:36.488152    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.488152    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.488678    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Audit-Id: fb5cc675-b39d-4cb0-ba8c-24140b3d95e8
	I0429 20:25:36.489818    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.490926    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.490926    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.490985    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.490985    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.494654    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:36.494654    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Audit-Id: fe6d880a-4cf8-4b10-8c7f-debde123eafc
	I0429 20:25:36.495423    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.991643    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.991643    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.991643    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.991855    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.996384    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:36.996384    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Audit-Id: 933a6dd5-a0f7-4380-8189-3e378a8a620d
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:36.997332    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.997760    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.997760    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.997760    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.997760    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.000889    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.000889    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.001211    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Audit-Id: 0342e743-45a6-4fd7-97be-55a766946396
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.001274    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.001759    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.495712    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:37.495712    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.495712    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.495712    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.508671    6560 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 20:25:37.508671    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Audit-Id: d30c6154-a41b-4a0d-976f-d19f40e67223
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.508671    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0429 20:25:37.510663    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.510663    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.510663    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.510663    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.513686    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.513686    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Audit-Id: 397b83a5-95f9-4df8-a76b-042ecc96922c
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.514662    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.514662    6560 pod_ready.go:92] pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.514662    6560 pod_ready.go:81] duration metric: took 2.0329329s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-515700
	I0429 20:25:37.514662    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.514662    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.514662    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.517691    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.517691    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Audit-Id: df53f071-06ed-4797-a51b-7d01b84cac86
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.518412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-515700","namespace":"kube-system","uid":"85f2dc9a-17b5-413c-ab83-d3cbe955571e","resourceVersion":"319","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.241.25:2379","kubernetes.io/config.hash":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.mirror":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.seen":"2024-04-29T20:25:11.718525866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0429 20:25:37.519044    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.519044    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.519124    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.519124    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.521788    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.521788    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Audit-Id: ee5fdb3e-9869-4cd7-996a-a25b453aeb6b
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.521944    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.522769    6560 pod_ready.go:92] pod "etcd-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.522844    6560 pod_ready.go:81] duration metric: took 8.1819ms for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.522844    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.523015    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-515700
	I0429 20:25:37.523015    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.523079    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.523079    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.525575    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.525575    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Audit-Id: cd9d851c-f606-48c9-8da3-3d194ab5464f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.526015    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-515700","namespace":"kube-system","uid":"f5a212eb-87a9-476a-981a-9f31731f39e6","resourceVersion":"312","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.241.25:8443","kubernetes.io/config.hash":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.mirror":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.seen":"2024-04-29T20:25:11.718530866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0429 20:25:37.526356    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.526356    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.526356    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.526356    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.535954    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:37.535954    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Audit-Id: 018aa21f-d408-4777-b54c-eb7aa2295a34
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.536470    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.536974    6560 pod_ready.go:92] pod "kube-apiserver-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.537034    6560 pod_ready.go:81] duration metric: took 14.0881ms for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537034    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537183    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-515700
	I0429 20:25:37.537276    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.537297    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.537297    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.539964    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.539964    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Audit-Id: d3232756-fc07-4b33-a3b5-989d2932cec4
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.541274    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-515700","namespace":"kube-system","uid":"2c9ba563-c2af-45b7-bc1e-bf39759a339b","resourceVersion":"315","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.mirror":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.seen":"2024-04-29T20:25:11.718532066Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0429 20:25:37.541935    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.541935    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.541935    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.541935    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.555960    6560 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 20:25:37.555960    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Audit-Id: 2d117219-3b1a-47fe-99a4-7e5aea7e84d3
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.555960    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.555960    6560 pod_ready.go:92] pod "kube-controller-manager-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.555960    6560 pod_ready.go:81] duration metric: took 18.9251ms for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.555960    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.556943    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gx5x
	I0429 20:25:37.556943    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.556943    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.556943    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.559965    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.560477    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.560477    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Audit-Id: 14e6d1be-eac6-4f20-9621-b409c951fae1
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.560781    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gx5x","generateName":"kube-proxy-","namespace":"kube-system","uid":"886ac698-7e9b-431b-b822-577331b02f41","resourceVersion":"407","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"027f1d05-009f-4199-81e9-45b0a2d3710f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"027f1d05-009f-4199-81e9-45b0a2d3710f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0429 20:25:37.561552    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.561581    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.561581    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.561581    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.567713    6560 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 20:25:37.567713    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Audit-Id: 678df177-6944-4d30-b889-62528c06bab2
	I0429 20:25:37.567713    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.568391    6560 pod_ready.go:92] pod "kube-proxy-6gx5x" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.568391    6560 pod_ready.go:81] duration metric: took 12.4313ms for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.568391    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.701559    6560 request.go:629] Waited for 132.9214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701779    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701853    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.701853    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.701853    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.705314    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.706129    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.706129    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.706183    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Audit-Id: 4fb010ad-4d68-4aa0-9ba4-f68d04faa9e8
	I0429 20:25:37.706412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-515700","namespace":"kube-system","uid":"096d3e94-25ba-49b3-b329-6fb47fc88f25","resourceVersion":"334","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.mirror":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.seen":"2024-04-29T20:25:11.718533166Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0429 20:25:37.905204    6560 request.go:629] Waited for 197.8802ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.905322    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.905466    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.909057    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.909159    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Audit-Id: a6cecf7e-83ad-4d5f-8cbb-a65ced7e83ce
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.909286    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.909697    6560 pod_ready.go:92] pod "kube-scheduler-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.909697    6560 pod_ready.go:81] duration metric: took 341.3037ms for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.909697    6560 pod_ready.go:38] duration metric: took 2.4545299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:37.909697    6560 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:25:37.923721    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:25:37.956142    6560 command_runner.go:130] > 2047
	I0429 20:25:37.956226    6560 api_server.go:72] duration metric: took 12.462433s to wait for apiserver process to appear ...
	I0429 20:25:37.956226    6560 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:25:37.956330    6560 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:25:37.965150    6560 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:25:37.965332    6560 round_trippers.go:463] GET https://172.17.241.25:8443/version
	I0429 20:25:37.965364    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.965364    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.965364    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.967124    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:37.967124    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Audit-Id: c3b17e5f-8eb5-4422-bcd1-48cea5831311
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.967423    6560 round_trippers.go:580]     Content-Length: 263
	I0429 20:25:37.967423    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 20:25:37.967530    6560 api_server.go:141] control plane version: v1.30.0
	I0429 20:25:37.967530    6560 api_server.go:131] duration metric: took 11.2306ms to wait for apiserver health ...
	I0429 20:25:37.967629    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:25:38.109818    6560 request.go:629] Waited for 142.1878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110201    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110256    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.110275    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.110275    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.118070    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.118070    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Audit-Id: 557b3073-d14e-4919-8133-995d5b042d22
	I0429 20:25:38.119823    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.123197    6560 system_pods.go:59] 8 kube-system pods found
	I0429 20:25:38.123197    6560 system_pods.go:61] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.123197    6560 system_pods.go:74] duration metric: took 155.566ms to wait for pod list to return data ...
	I0429 20:25:38.123197    6560 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:25:38.295950    6560 request.go:629] Waited for 172.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.296300    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.296300    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.300424    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:38.300424    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Content-Length: 261
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Audit-Id: 7466bf5b-fa07-4a6b-bc06-274738fc9259
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.300674    6560 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"13c4332f-9236-4f04-9e46-f5a98bc3d731","resourceVersion":"343","creationTimestamp":"2024-04-29T20:25:24Z"}}]}
	I0429 20:25:38.300674    6560 default_sa.go:45] found service account: "default"
	I0429 20:25:38.300674    6560 default_sa.go:55] duration metric: took 177.4758ms for default service account to be created ...
	I0429 20:25:38.300674    6560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:25:38.498686    6560 request.go:629] Waited for 197.291ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.498782    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.499005    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.499005    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.499005    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.506756    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.507387    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Audit-Id: ffc5efdb-4263-4450-8ff2-c1bb3f979300
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.507485    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.507503    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.508809    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.512231    6560 system_pods.go:86] 8 kube-system pods found
	I0429 20:25:38.512305    6560 system_pods.go:89] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.512305    6560 system_pods.go:89] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.512451    6560 system_pods.go:89] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.512451    6560 system_pods.go:126] duration metric: took 211.7756ms to wait for k8s-apps to be running ...
	I0429 20:25:38.512451    6560 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:25:38.526027    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:25:38.555837    6560 system_svc.go:56] duration metric: took 43.3852ms WaitForService to wait for kubelet
	I0429 20:25:38.555837    6560 kubeadm.go:576] duration metric: took 13.0620394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:25:38.556007    6560 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:25:38.701455    6560 request.go:629] Waited for 145.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701896    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701917    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.701917    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.702032    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.709221    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.709221    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Audit-Id: 9241b2a0-c483-4bfb-8a19-8f5c9b610b53
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.709221    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0429 20:25:38.710061    6560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:25:38.710061    6560 node_conditions.go:123] node cpu capacity is 2
	I0429 20:25:38.710061    6560 node_conditions.go:105] duration metric: took 154.0529ms to run NodePressure ...
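The repeated "Waited for … due to client-side throttling, not priority and fairness" lines above come from client-go's local token-bucket rate limiter, which delays requests on the client before they ever reach the API server (defaults are on the order of a few QPS with a small burst; minikube may configure its own values). A minimal sketch of that style of limiter, with hypothetical names, not client-go's actual code:

```python
import time

class TokenBucket:
    """Minimal client-side rate-limiter sketch (illustrative, not client-go)."""

    def __init__(self, qps: float, burst: int):
        self.rate = qps            # tokens refilled per second
        self.capacity = burst      # maximum stored tokens
        self.tokens = float(burst)
        self.last = time.monotonic()

    def wait_time(self) -> float:
        """Reserve one token; return how long the caller must wait for it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 0.0
        wait = (1 - self.tokens) / self.rate
        self.tokens -= 1  # goes negative: the debt is the wait the caller sleeps
        return wait
```

The first `burst` calls return immediately; subsequent calls return waits of roughly `1/qps` seconds, which matches the ~150-200 ms pauses logged above.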
	I0429 20:25:38.710061    6560 start.go:240] waiting for startup goroutines ...
	I0429 20:25:38.710061    6560 start.go:245] waiting for cluster config update ...
	I0429 20:25:38.710061    6560 start.go:254] writing updated cluster config ...
	I0429 20:25:38.717493    6560 out.go:177] 
	I0429 20:25:38.721129    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.735840    6560 out.go:177] * Starting "multinode-515700-m02" worker node in "multinode-515700" cluster
	I0429 20:25:38.738518    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:25:38.738518    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:25:38.738983    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:25:38.739240    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:25:38.739481    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.745029    6560 start.go:360] acquireMachinesLock for multinode-515700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:25:38.745029    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m02"
	I0429 20:25:38.745029    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 20:25:38.745575    6560 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 20:25:38.748852    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:25:38.748852    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:25:38.748852    6560 client.go:168] LocalClient.Create starting
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:40.746212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:25:42.605453    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:47.992432    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:47.992702    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:47.996014    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:25:48.551162    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: Creating VM...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:51.874174    6560 main.go:141] libmachine: Using switch "Default Switch"
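The `Get-VMSwitch` query above filters for any External switch or the well-known "Default Switch" GUID and sorts so External candidates come first. A sketch of the same selection logic over the returned JSON (function name is hypothetical; in Hyper-V's enum, `SwitchType` 2 is External and the built-in Default Switch reports 1, Internal, as seen in the log):

```python
import json

DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"  # built-in Default Switch GUID

def pick_switch(get_vmswitch_json: str) -> str:
    """Prefer an External switch; fall back to the built-in Default Switch."""
    switches = json.loads(get_vmswitch_json)
    for sw in switches:
        if sw.get("SwitchType") == 2:  # External
            return sw["Name"]
    for sw in switches:
        if sw.get("Id", "").lower() == DEFAULT_SWITCH_ID:
            return sw["Name"]
    raise RuntimeError("no usable Hyper-V switch found")
```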
	I0429 20:25:51.874221    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:53.736899    6560 main.go:141] libmachine: Creating VHD
	I0429 20:25:53.737514    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D65FFD0C-285E-44D0-8723-21544BDDE71A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:25:57.529054    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:00.734035    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -SizeBytes 20000MB
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stderr =====>] : 
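The sequence above ("Writing magic tar header" / "Writing SSH key tar header" between New-VHD and Convert-VHD) is docker-machine's raw-disk trick: a tiny fixed VHD is created, a tar archive carrying the SSH key is written directly over its leading bytes, and the file is then converted to a dynamic VHD and resized; on first boot the guest detects the tar magic and provisions the disk around it. A sketch of embedding a tar stream at the start of a raw image (file and function names are illustrative, not minikube's actual code):

```python
import io
import tarfile

def write_key_tar(image_path: str, key_bytes: bytes) -> None:
    """Embed an SSH key as a tar archive at the start of a raw disk image."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/id_rsa")
        info.size = len(key_bytes)
        tar.addfile(info, io.BytesIO(key_bytes))
    # Overwrite only the leading bytes; the rest of the image stays zeroed.
    with open(image_path, "r+b") as img:
        img.write(buf.getvalue())
```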
	I0429 20:26:03.314283    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-515700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700-m02 -DynamicMemoryEnabled $false
	I0429 20:26:09.480100    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700-m02 -Count 2
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:11.716979    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\boot2docker.iso'
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:14.377298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd'
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:17.090909    6560 main.go:141] libmachine: Starting VM...
	I0429 20:26:17.090909    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m02
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:22.527096    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:26.113296    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:28.339433    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:30.953587    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:30.953628    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:31.955478    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:34.197688    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:34.197831    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:34.197901    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:37.817016    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:42.682666    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:42.683603    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:43.685897    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:48.604877    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:48.604915    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:48.604999    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:50.794876    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:50.795093    6560 main.go:141] libmachine: [stderr =====>] : 
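The "Waiting for host to start..." stretch above alternates a VM-state query with a NIC address query, sleeping about a second between empty results until Hyper-V reports `172.17.253.145`. That retry loop can be sketched as a generic poll-until-non-empty helper (names and the injected `probe` callable are assumptions for illustration):

```python
import time
from typing import Callable, Optional

def wait_for_ip(probe: Callable[[], str], timeout: float = 120.0,
                delay: float = 1.0) -> Optional[str]:
    """Poll `probe` (e.g. a Get-VM ipaddresses query) until it returns a
    non-empty address, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = probe().strip()
        if ip:
            return ip
        time.sleep(delay)
    return None
```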
	I0429 20:26:50.795407    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:26:50.795407    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:52.992195    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:52.992243    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:52.992331    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:55.630552    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:55.641728    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:26:55.642758    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:26:55.769182    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:26:55.769182    6560 buildroot.go:166] provisioning hostname "multinode-515700-m02"
	I0429 20:26:55.769333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:57.942857    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:57.943721    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:57.943789    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:00.610012    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:00.610498    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:00.617342    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:00.618022    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:00.618022    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700-m02 && echo "multinode-515700-m02" | sudo tee /etc/hostname
	I0429 20:27:00.774430    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m02
	
	I0429 20:27:00.775391    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:02.970796    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:02.971352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:02.971577    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:05.640782    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:05.640782    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:05.640782    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
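The shell snippet above pins the new hostname to `127.0.1.1` idempotently: do nothing if the hostname already appears in `/etc/hosts`, rewrite an existing `127.0.1.1` line if there is one, otherwise append. The same logic as a pure function over the file's content (a sketch for illustration, not minikube's implementation):

```python
import re

def pin_hostname(hosts_text: str, hostname: str) -> str:
    """Ensure `hostname` resolves via a 127.0.1.1 entry in an /etc/hosts body."""
    if re.search(r"^.*\s" + re.escape(hostname) + r"$", hosts_text, re.M):
        return hosts_text  # already present; leave the file alone
    if re.search(r"^127\.0\.1\.1\s.*$", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {hostname}",
                      hosts_text, flags=re.M)
    return hosts_text.rstrip("\n") + f"\n127.0.1.1 {hostname}\n"
```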
	I0429 20:27:05.779330    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:27:05.779330    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:27:05.779435    6560 buildroot.go:174] setting up certificates
	I0429 20:27:05.779435    6560 provision.go:84] configureAuth start
	I0429 20:27:05.779531    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:07.939785    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:10.607752    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:10.608236    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:10.608319    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:15.428095    6560 provision.go:143] copyHostCerts
	I0429 20:27:15.429066    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:27:15.429066    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:27:15.429066    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:27:15.429626    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:27:15.430936    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:27:15.431366    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:27:15.431366    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:27:15.431875    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:27:15.432822    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:27:15.433064    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:27:15.433064    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:27:15.433498    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:27:15.434807    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700-m02 san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]
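[editor's note] The `provision.go:117` line above generates a TLS server certificate signed by the minikube CA, with the listed SANs. minikube does this in Go, not via openssl; the following is an equivalent openssl sketch of the same operation, using a throwaway CA in place of `.minikube\certs\ca.pem` and the org/SAN values taken from the log line.

```shell
# Sketch only: openssl equivalent of minikube's in-Go server-cert generation.
set -e
cd "$(mktemp -d)"
# throwaway CA standing in for the profile's ca.pem / ca-key.pem
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -days 1 -subj "/CN=minikubeCA"
# server key + CSR with the org name from the log ("org=jenkins.multinode-515700-m02")
openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/O=jenkins.multinode-515700-m02"
# sign with the SAN list from the log ("san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]")
printf 'subjectAltName=IP:127.0.0.1,IP:172.17.253.145,DNS:localhost,DNS:minikube,DNS:multinode-515700-m02\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -days 1 -extfile san.cnf -out server.pem
# confirm the chain validates against the CA
openssl verify -CAfile ca.pem server.pem
```

The resulting `server.pem` / `server-key.pem` correspond to the files scp'd to `/etc/docker/` a few lines below.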
	I0429 20:27:15.511954    6560 provision.go:177] copyRemoteCerts
	I0429 20:27:15.527105    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:27:15.527105    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:20.368198    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:20.368587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:20.368930    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:20.467819    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9406764s)
	I0429 20:27:20.468832    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:27:20.469887    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:27:20.524889    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:27:20.525559    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 20:27:20.578020    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:27:20.578217    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:27:20.634803    6560 provision.go:87] duration metric: took 14.8552541s to configureAuth
	I0429 20:27:20.634874    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:27:20.635533    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:27:20.635638    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:22.779762    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:25.427345    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:25.427345    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:25.428345    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:27:25.562050    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:27:25.562195    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:27:25.562515    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:27:25.562592    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:30.404141    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:30.405195    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:30.412105    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:30.413171    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:30.413700    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.241.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:27:30.578477    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.241.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:27:30.578477    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:32.772580    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:35.465933    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:35.466426    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:35.466509    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:27:37.701893    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
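[editor's note] The `diff ... || { mv ...; systemctl ... }` command above is an idempotent-update pattern: the unit file is replaced (and docker restarted) only when the new file differs, and a missing `docker.service` (as in this run, hence the "can't stat" message) also takes the replace branch. A minimal demonstration on plain files, with the `systemctl` steps omitted:

```shell
# Demonstrates the diff-or-replace pattern from the log on throwaway files.
set -e
d="$(mktemp -d)"
printf 'ExecStart=/usr/bin/dockerd\n' > "$d/docker.service.new"
# docker.service does not exist yet, so diff exits non-zero and the new file is moved in,
# exactly as in the log ("diff: can't stat '/lib/systemd/system/docker.service'")
diff -u "$d/docker.service" "$d/docker.service.new" 2>/dev/null \
  || mv "$d/docker.service.new" "$d/docker.service"
# a second run with identical content would diff clean and skip the replace/restart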
	I0429 20:27:37.701981    6560 machine.go:97] duration metric: took 46.9062133s to provisionDockerMachine
	I0429 20:27:37.702052    6560 client.go:171] duration metric: took 1m58.9522849s to LocalClient.Create
	I0429 20:27:37.702194    6560 start.go:167] duration metric: took 1m58.9524269s to libmachine.API.Create "multinode-515700"
	I0429 20:27:37.702194    6560 start.go:293] postStartSetup for "multinode-515700-m02" (driver="hyperv")
	I0429 20:27:37.702194    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:27:37.716028    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:27:37.716028    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:39.888498    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:39.889355    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:39.889707    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:42.576527    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:42.688245    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9721792s)
	I0429 20:27:42.703472    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:27:42.710185    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:27:42.710391    6560 command_runner.go:130] > ID=buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:27:42.710391    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:27:42.710562    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:27:42.710562    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:27:42.710640    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:27:42.712121    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:27:42.712121    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:27:42.725734    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:27:42.745571    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:27:42.798223    6560 start.go:296] duration metric: took 5.0959902s for postStartSetup
	I0429 20:27:42.801718    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:44.985225    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:47.630520    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:27:47.633045    6560 start.go:128] duration metric: took 2m8.8864784s to createHost
	I0429 20:27:47.633167    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:49.823309    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:49.823412    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:49.823495    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:52.524084    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:52.524183    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:52.530451    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:52.531204    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:52.531204    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:27:52.658970    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422472.660345683
	
	I0429 20:27:52.659208    6560 fix.go:216] guest clock: 1714422472.660345683
	I0429 20:27:52.659208    6560 fix.go:229] Guest: 2024-04-29 20:27:52.660345683 +0000 UTC Remote: 2024-04-29 20:27:47.6330452 +0000 UTC m=+346.394263801 (delta=5.027300483s)
	I0429 20:27:52.659208    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:57.461861    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:57.461927    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:57.467747    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:57.468699    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:57.468699    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422472
	I0429 20:27:57.617018    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:27:52 UTC 2024
	
	I0429 20:27:57.617018    6560 fix.go:236] clock set: Mon Apr 29 20:27:52 UTC 2024
	 (err=<nil>)
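[editor's note] The `fix.go` lines above read the guest's clock over SSH (`date +%s.%N`), compare it with the host-side timestamp, and reset the guest clock with `sudo date -s @<epoch>` when they drift (delta here was ~5.03s). A sketch of that comparison using the epoch values from this log; the `date -s` step is shown but not executed:

```shell
# Sketch of the guest-clock drift check; values taken from the log above.
guest=1714422472   # integer part of the guest's `date +%s.%N` (1714422472.660345683)
remote=1714422467  # host-side timestamp: 2024-04-29 20:27:47 UTC
delta=$((guest - remote))
echo "delta=${delta}s"
# minikube then runs on the guest:  sudo date -s @$guest   (not executed here)
```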
	I0429 20:27:57.617018    6560 start.go:83] releasing machines lock for "multinode-515700-m02", held for 2m18.8709228s
	I0429 20:27:57.618122    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:59.795247    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:02.475615    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:02.475867    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:02.479078    6560 out.go:177] * Found network options:
	I0429 20:28:02.481434    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.483990    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.486147    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.488513    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 20:28:02.490094    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.492090    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:28:02.492090    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:02.504078    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:28:02.504078    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:07.440744    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.440938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.441026    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.467629    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.629032    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:28:07.630105    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1379759s)
	I0429 20:28:07.630105    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 20:28:07.630229    6560 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1259881s)
	W0429 20:28:07.630229    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:28:07.649597    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:28:07.685721    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:28:07.685954    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:28:07.685954    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:07.686227    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:07.722613    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:28:07.736060    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:28:07.771561    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:28:07.793500    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:28:07.809715    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:28:07.846242    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.882404    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:28:07.918280    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.956186    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:28:07.994072    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:28:08.029701    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:28:08.067417    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
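[editor's note] The run of `sed -i -r` commands above edits `/etc/containerd/config.toml` in place; the recurring `^( *)key = .*$` / `\1key = ...` idiom rewrites a key's value while the backreference preserves its original indentation. The same idiom on a throwaway file (GNU sed assumed, as on the Buildroot guest):

```shell
# Demonstrates the indentation-preserving sed rewrite used on config.toml above.
f="$(mktemp)"
printf '    SystemdCgroup = true\n' > "$f"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$f"
cat "$f"
```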
	I0429 20:28:08.104772    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:28:08.126209    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:28:08.140685    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:28:08.181598    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:08.410362    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:28:08.449856    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:08.466974    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Unit]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:28:08.492900    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:28:08.492900    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:28:08.492900    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Service]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Type=notify
	I0429 20:28:08.492900    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:28:08.492900    6560 command_runner.go:130] > Environment=NO_PROXY=172.17.241.25
	I0429 20:28:08.492900    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:28:08.492900    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:28:08.492900    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:28:08.492900    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:28:08.492900    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:28:08.492900    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:28:08.492900    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:28:08.492900    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:28:08.492900    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:28:08.493891    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:28:08.493891    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:28:08.493891    6560 command_runner.go:130] > Delegate=yes
	I0429 20:28:08.493891    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:28:08.493891    6560 command_runner.go:130] > KillMode=process
	I0429 20:28:08.493891    6560 command_runner.go:130] > [Install]
	I0429 20:28:08.493891    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:28:08.505928    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.548562    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:28:08.606977    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.652185    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.695349    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:28:08.785230    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.816602    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:08.853434    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:28:08.870019    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:28:08.876256    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:28:08.890247    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:28:08.911471    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:28:08.962890    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:28:09.201152    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:28:09.397561    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:28:09.398166    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:28:09.451159    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:09.673084    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:29:10.809648    6560 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 20:29:10.809648    6560 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 20:29:10.809648    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1361028s)
	I0429 20:29:10.827248    6560 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 20:29:10.852173    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 20:29:10.852279    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 20:29:10.852319    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 20:29:10.852739    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854100    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854118    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854142    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854175    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 20:29:10.865335    6560 out.go:177] 
	W0429 20:29:10.865335    6560 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 20:29:10.865335    6560 out.go:239] * 
	W0429 20:29:10.869400    6560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:29:10.876700    6560 out.go:177] 
	
	
	==> Docker <==
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:08 multinode-515700 dockerd[1325]: 2024/04/29 20:42:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:55 multinode-515700 dockerd[1325]: 2024/04/29 20:42:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:55 multinode-515700 dockerd[1325]: 2024/04/29 20:42:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:26 multinode-515700 dockerd[1325]: 2024/04/29 20:47:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	32c6f043cec2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago      Running             busybox                   0                   e1a58f6d29ec9       busybox-fc5497c4f-dv5v8
	15da1b832ef20       cbb01a7bd410d                                                                                         25 minutes ago      Running             coredns                   0                   73ab97e30d3d0       coredns-7db6d8ff4d-drcsj
	b26e455e6f823       6e38f40d628db                                                                                         25 minutes ago      Running             storage-provisioner       0                   0274116a036cf       storage-provisioner
	11141cf0a01e5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago      Running             kindnet-cni               0                   5c226cf922db1       kindnet-lt84t
	8d116812e2fa7       a0bf559e280cf                                                                                         25 minutes ago      Running             kube-proxy                0                   c4e88976a7bb5       kube-proxy-6gx5x
	9b9ad8fbed853       c42f13656d0b2                                                                                         25 minutes ago      Running             kube-apiserver            0                   e1040c321d522       kube-apiserver-multinode-515700
	7748681b165fb       259c8277fcbbc                                                                                         25 minutes ago      Running             kube-scheduler            0                   ab47450efbe05       kube-scheduler-multinode-515700
	01f30fac305bc       3861cfcd7c04c                                                                                         25 minutes ago      Running             etcd                      0                   b5202cca492c4       etcd-multinode-515700
	c5de44f1f1066       c7aad43836fa5                                                                                         25 minutes ago      Running             kube-controller-manager   0                   4ae9818227910       kube-controller-manager-multinode-515700
	
	
	==> coredns [15da1b832ef2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 658b75f59357881579d818bea4574a099ffd8bf4e34cb2d6414c381890635887b0895574e607ab48d69c0bc2657640404a00a48de79c5b96ce27f6a68e70a912
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36587 - 14172 "HINFO IN 4725538422205950284.7962128480288568612. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062354244s
	[INFO] 10.244.0.3:46156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244102s
	[INFO] 10.244.0.3:48057 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.210765088s
	[INFO] 10.244.0.3:47676 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15403778s
	[INFO] 10.244.0.3:57534 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.237328274s
	[INFO] 10.244.0.3:38726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000345103s
	[INFO] 10.244.0.3:54844 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.04703092s
	[INFO] 10.244.0.3:51897 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000879808s
	[INFO] 10.244.0.3:57925 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122101s
	[INFO] 10.244.0.3:39997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012692914s
	[INFO] 10.244.0.3:37301 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000333403s
	[INFO] 10.244.0.3:60294 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172702s
	[INFO] 10.244.0.3:33135 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000250902s
	[INFO] 10.244.0.3:46585 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141701s
	[INFO] 10.244.0.3:41280 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127902s
	[INFO] 10.244.0.3:46602 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000220001s
	[INFO] 10.244.0.3:47802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077001s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	[INFO] 10.244.0.3:45741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166201s
	[INFO] 10.244.0.3:48683 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	[INFO] 10.244.0.3:47252 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159702s
	
	
	==> describe nodes <==
	Name:               multinode-515700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:50:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:50:41 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:50:41 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:50:41 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:50:41 +0000   Mon, 29 Apr 2024 20:25:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.241.25
	  Hostname:    multinode-515700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc8de88647d944658545c7ae4a702aea
	  System UUID:                68adc21b-67d2-5446-9537-0dea9fd880a0
	  Boot ID:                    9507eca5-5f1f-4862-974e-a61fb27048d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dv5v8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-drcsj                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-multinode-515700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-lt84t                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-multinode-515700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-multinode-515700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-6gx5x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-multinode-515700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25m                kube-proxy       
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25m                node-controller  Node multinode-515700 event: Registered Node multinode-515700 in Controller
	  Normal  NodeReady                25m                kubelet          Node multinode-515700 status is now: NodeReady
	
	
	Name:               multinode-515700-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T20_46_05_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:46:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:49:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:49:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:49:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:49:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:49:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.240.210
	  Hostname:    multinode-515700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cba11e160ba341e08600b430623543e3
	  System UUID:                c93866d4-f3c2-8b4a-808f-8a49ef3473c2
	  Boot ID:                    eca6382a-2500-4a1e-9ddd-477f0ebe0910
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2t4c2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kindnet-svhl6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m47s
	  kube-system                 kube-proxy-ds5fx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m47s (x2 over 4m48s)  kubelet          Node multinode-515700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x2 over 4m48s)  kubelet          Node multinode-515700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x2 over 4m48s)  kubelet          Node multinode-515700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m43s                  node-controller  Node multinode-515700-m03 event: Registered Node multinode-515700-m03 in Controller
	  Normal  NodeReady                4m24s                  kubelet          Node multinode-515700-m03 status is now: NodeReady
	  Normal  NodeNotReady             63s                    node-controller  Node multinode-515700-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 20:24] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.212417] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +31.830340] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.112166] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.613568] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.259380] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.863180] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.213718] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.233297] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.301716] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.953055] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.129851] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.793087] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[Apr29 20:25] systemd-fstab-generator[1710]: Ignoring "noauto" option for root device
	[  +0.110579] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.112113] systemd-fstab-generator[2108]: Ignoring "noauto" option for root device
	[  +0.165104] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.220827] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.255309] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.248279] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 20:26] hrtimer: interrupt took 3466547 ns
	[Apr29 20:29] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [01f30fac305b] <==
	{"level":"info","ts":"2024-04-29T20:45:58.717909Z","caller":"traceutil/trace.go:171","msg":"trace[259978277] transaction","detail":"{read_only:false; response_revision:1454; number_of_response:1; }","duration":"179.638307ms","start":"2024-04-29T20:45:58.538241Z","end":"2024-04-29T20:45:58.71788Z","steps":["trace[259978277] 'process raft request'  (duration: 179.431405ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:45:58.85575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.622912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:45:58.855965Z","caller":"traceutil/trace.go:171","msg":"trace[1396568622] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1454; }","duration":"115.880014ms","start":"2024-04-29T20:45:58.74007Z","end":"2024-04-29T20:45:58.85595Z","steps":["trace[1396568622] 'range keys from in-memory index tree'  (duration: 115.547212ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:09.855862Z","caller":"traceutil/trace.go:171","msg":"trace[811401261] transaction","detail":"{read_only:false; response_revision:1495; number_of_response:1; }","duration":"102.190223ms","start":"2024-04-29T20:46:09.753656Z","end":"2024-04-29T20:46:09.855846Z","steps":["trace[811401261] 'process raft request'  (duration: 102.095822ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:10.071953Z","caller":"traceutil/trace.go:171","msg":"trace[1996796465] transaction","detail":"{read_only:false; response_revision:1496; number_of_response:1; }","duration":"300.29343ms","start":"2024-04-29T20:46:09.77164Z","end":"2024-04-29T20:46:10.071933Z","steps":["trace[1996796465] 'process raft request'  (duration: 295.855603ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:46:10.072618Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:09.771623Z","time spent":"300.479031ms","remote":"127.0.0.1:50854","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2962,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-515700-m03\" mod_revision:1487 > success:<request_put:<key:\"/registry/minions/multinode-515700-m03\" value_size:2916 >> failure:<request_range:<key:\"/registry/minions/multinode-515700-m03\" > >"}
	{"level":"info","ts":"2024-04-29T20:46:15.569199Z","caller":"traceutil/trace.go:171","msg":"trace[1643861658] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"218.350023ms","start":"2024-04-29T20:46:15.350828Z","end":"2024-04-29T20:46:15.569178Z","steps":["trace[1643861658] 'process raft request'  (duration: 218.141522ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:15.960586Z","caller":"traceutil/trace.go:171","msg":"trace[1497086569] linearizableReadLoop","detail":"{readStateIndex:1774; appliedIndex:1773; }","duration":"367.734728ms","start":"2024-04-29T20:46:15.592832Z","end":"2024-04-29T20:46:15.960567Z","steps":["trace[1497086569] 'read index received'  (duration: 332.248313ms)","trace[1497086569] 'applied index is now lower than readState.Index'  (duration: 35.485815ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T20:46:15.960951Z","caller":"traceutil/trace.go:171","msg":"trace[818980090] transaction","detail":"{read_only:false; response_revision:1504; number_of_response:1; }","duration":"594.879604ms","start":"2024-04-29T20:46:15.36606Z","end":"2024-04-29T20:46:15.96094Z","steps":["trace[818980090] 'process raft request'  (duration: 559.784592ms)","trace[818980090] 'compare'  (duration: 34.64431ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:46:15.961608Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:15.366043Z","time spent":"594.957105ms","remote":"127.0.0.1:50958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":569,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" mod_revision:1486 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" value_size:508 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" > >"}
	{"level":"warn","ts":"2024-04-29T20:46:15.962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"369.162137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-515700-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-04-29T20:46:15.96206Z","caller":"traceutil/trace.go:171","msg":"trace[601879282] range","detail":"{range_begin:/registry/minions/multinode-515700-m03; range_end:; response_count:1; response_revision:1504; }","duration":"369.225137ms","start":"2024-04-29T20:46:15.592827Z","end":"2024-04-29T20:46:15.962052Z","steps":["trace[601879282] 'agreement among raft nodes before linearized reading'  (duration: 369.135436ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:46:15.962525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:15.592782Z","time spent":"369.464038ms","remote":"127.0.0.1:50854","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3172,"request content":"key:\"/registry/minions/multinode-515700-m03\" "}
	{"level":"warn","ts":"2024-04-29T20:46:15.962622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.652243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:46:15.962781Z","caller":"traceutil/trace.go:171","msg":"trace[632284179] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1504; }","duration":"221.955444ms","start":"2024-04-29T20:46:15.740814Z","end":"2024-04-29T20:46:15.962769Z","steps":["trace[632284179] 'agreement among raft nodes before linearized reading'  (duration: 221.659043ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:49:34.961477Z","caller":"traceutil/trace.go:171","msg":"trace[502856506] linearizableReadLoop","detail":"{readStateIndex:2019; appliedIndex:2018; }","duration":"247.093192ms","start":"2024-04-29T20:49:34.714363Z","end":"2024-04-29T20:49:34.961457Z","steps":["trace[502856506] 'read index received'  (duration: 246.857491ms)","trace[502856506] 'applied index is now lower than readState.Index'  (duration: 235.101µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:49:34.961633Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.382193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-29T20:49:34.961717Z","caller":"traceutil/trace.go:171","msg":"trace[601185574] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:0; response_revision:1707; }","duration":"247.481994ms","start":"2024-04-29T20:49:34.714192Z","end":"2024-04-29T20:49:34.961674Z","steps":["trace[601185574] 'agreement among raft nodes before linearized reading'  (duration: 247.359693ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:49:34.962068Z","caller":"traceutil/trace.go:171","msg":"trace[1359928624] transaction","detail":"{read_only:false; response_revision:1707; number_of_response:1; }","duration":"335.041251ms","start":"2024-04-29T20:49:34.627013Z","end":"2024-04-29T20:49:34.962054Z","steps":["trace[1359928624] 'process raft request'  (duration: 334.263847ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:49:34.962372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:49:34.627001Z","time spent":"335.313352ms","remote":"127.0.0.1:50852","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1705 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-29T20:49:36.278626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.337569ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14852747513224610764 > lease_revoke:<id:4e1f8f2b8851c396>","response":"size:28"}
	{"level":"info","ts":"2024-04-29T20:49:37.084787Z","caller":"traceutil/trace.go:171","msg":"trace[1339822422] transaction","detail":"{read_only:false; response_revision:1708; number_of_response:1; }","duration":"112.564787ms","start":"2024-04-29T20:49:36.9722Z","end":"2024-04-29T20:49:37.084765Z","steps":["trace[1339822422] 'process raft request'  (duration: 112.352586ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:50:06.320544Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1410}
	{"level":"info","ts":"2024-04-29T20:50:06.329963Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1410,"took":"8.848946ms","hash":1297927457,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1785856,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-29T20:50:06.330194Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1297927457,"revision":1410,"compact-revision":1169}
	
	
	==> kernel <==
	 20:50:52 up 27 min,  0 users,  load average: 0.18, 0.56, 0.47
	Linux multinode-515700 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11141cf0a01e] <==
	I0429 20:49:46.882447       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:49:56.898550       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:49:56.898598       1 main.go:227] handling current node
	I0429 20:49:56.898613       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:49:56.898620       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:50:06.906618       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:50:06.906814       1 main.go:227] handling current node
	I0429 20:50:06.906831       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:50:06.906840       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:50:16.923745       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:50:16.923828       1 main.go:227] handling current node
	I0429 20:50:16.923842       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:50:16.923849       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:50:26.937513       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:50:26.937637       1 main.go:227] handling current node
	I0429 20:50:26.937653       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:50:26.937661       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:50:36.947885       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:50:36.947943       1 main.go:227] handling current node
	I0429 20:50:36.947961       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:50:36.947969       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:50:46.957568       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:50:46.957692       1 main.go:227] handling current node
	I0429 20:50:46.957709       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:50:46.957718       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9b9ad8fbed85] <==
	I0429 20:25:08.456691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 20:25:09.052862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 20:25:09.062497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 20:25:09.063038       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 20:25:10.434046       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 20:25:10.531926       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 20:25:10.667114       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 20:25:10.682682       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.241.25]
	I0429 20:25:10.685084       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 20:25:10.705095       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 20:25:11.202529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 20:25:11.660474       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 20:25:11.702512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 20:25:11.739640       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 20:25:25.195544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 20:25:25.294821       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 20:41:45.603992       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54600: use of closed network connection
	E0429 20:41:46.683622       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54606: use of closed network connection
	E0429 20:41:47.742503       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54616: use of closed network connection
	E0429 20:42:24.359204       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54636: use of closed network connection
	E0429 20:42:34.907983       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54638: use of closed network connection
	I0429 20:46:15.963628       1 trace.go:236] Trace[1378232527]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:bc84c8cc-c1e5-4f4d-8a1c-4ed7b226292a,client:172.17.240.210,api-group:coordination.k8s.io,api-version:v1,name:multinode-515700-m03,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-515700-m03,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (29-Apr-2024 20:46:15.363) (total time: 599ms):
	Trace[1378232527]: ["GuaranteedUpdate etcd3" audit-id:bc84c8cc-c1e5-4f4d-8a1c-4ed7b226292a,key:/leases/kube-node-lease/multinode-515700-m03,type:*coordination.Lease,resource:leases.coordination.k8s.io 599ms (20:46:15.364)
	Trace[1378232527]:  ---"Txn call completed" 598ms (20:46:15.963)]
	Trace[1378232527]: [599.725533ms] [599.725533ms] END
	
	
	==> kube-controller-manager [c5de44f1f106] <==
	I0429 20:25:25.820241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.613668ms"
	I0429 20:25:25.820606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.801µs"
	I0429 20:25:26.647122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.452819ms"
	I0429 20:25:26.673190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.454556ms"
	I0429 20:25:26.673366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.301µs"
	I0429 20:25:35.442523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48µs"
	I0429 20:25:35.504302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.901µs"
	I0429 20:25:37.519404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.21268ms"
	I0429 20:25:37.519516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.698µs"
	I0429 20:25:39.495810       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 20:29:47.937478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.419556ms"
	I0429 20:29:47.961915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.36964ms"
	I0429 20:29:47.962862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.499µs"
	I0429 20:29:52.098445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.730146ms"
	I0429 20:29:52.098921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.902µs"
	I0429 20:46:05.025369       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-515700-m03\" does not exist"
	I0429 20:46:05.038750       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-515700-m03" podCIDRs=["10.244.1.0/24"]
	I0429 20:46:09.749698       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-515700-m03"
	I0429 20:46:28.280618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-515700-m03"
	I0429 20:46:28.324633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.8µs"
	I0429 20:46:28.354027       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.9µs"
	I0429 20:46:31.239793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.942065ms"
	I0429 20:46:31.240386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="306.702µs"
	I0429 20:49:49.871652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.942339ms"
	I0429 20:49:49.876024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.5µs"
	
	
	==> kube-proxy [8d116812e2fa] <==
	I0429 20:25:27.278575       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:25:27.322396       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.241.25"]
	I0429 20:25:27.381777       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:25:27.381896       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:25:27.381924       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:25:27.389649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:25:27.392153       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:25:27.392448       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:25:27.396161       1 config.go:192] "Starting service config controller"
	I0429 20:25:27.396372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:25:27.396564       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:25:27.396976       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:25:27.399035       1 config.go:319] "Starting node config controller"
	I0429 20:25:27.399236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:25:27.497521       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:25:27.497518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:25:27.500527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7748681b165f] <==
	W0429 20:25:09.310708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.311983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.372121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.372287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.389043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:25:09.389975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 20:25:09.402308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.402357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.414781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:25:09.414997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:25:09.463545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:25:09.463684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:25:09.473360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 20:25:09.473524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 20:25:09.538214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 20:25:09.538587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 20:25:09.595918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.596510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.751697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 20:25:09.752615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 20:25:09.794103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:25:09.794595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 20:25:09.800334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 20:25:09.800494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 20:25:11.092300       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:46:11 multinode-515700 kubelet[2116]: E0429 20:46:11.926896    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:46:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:47:11 multinode-515700 kubelet[2116]: E0429 20:47:11.924357    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:47:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:48:11 multinode-515700 kubelet[2116]: E0429 20:48:11.922927    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:48:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:48:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:48:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:48:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:49:11 multinode-515700 kubelet[2116]: E0429 20:49:11.923081    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:49:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:49:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:49:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:49:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:50:11 multinode-515700 kubelet[2116]: E0429 20:50:11.923459    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:50:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:50:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:50:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:50:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0429 20:50:44.551610   12952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700: (12.2878622s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-515700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (123.13s)

TestMultiNode/serial/StartAfterStop (147s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 node start m03 -v=7 --alsologtostderr: exit status 1 (54.301025s)

-- stdout --
	* Starting "multinode-515700-m03" worker node in "multinode-515700" cluster
	* Restarting existing hyperv VM for "multinode-515700-m03" ...

-- /stdout --
** stderr ** 
	W0429 20:51:07.078500   13240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 20:51:07.172540   13240 out.go:291] Setting OutFile to fd 1000 ...
	I0429 20:51:07.190379   13240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:51:07.190508   13240 out.go:304] Setting ErrFile to fd 1724...
	I0429 20:51:07.190508   13240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:51:07.213392   13240 mustload.go:65] Loading cluster: multinode-515700
	I0429 20:51:07.214326   13240 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:51:07.215372   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:09.332908   13240 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 20:51:09.333050   13240 main.go:141] libmachine: [stderr =====>] : 
	W0429 20:51:09.333188   13240 host.go:58] "multinode-515700-m03" host status: Stopped
	I0429 20:51:09.337394   13240 out.go:177] * Starting "multinode-515700-m03" worker node in "multinode-515700" cluster
	I0429 20:51:09.339597   13240 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:51:09.339597   13240 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 20:51:09.340182   13240 cache.go:56] Caching tarball of preloaded images
	I0429 20:51:09.340430   13240 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:51:09.340430   13240 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:51:09.341165   13240 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:51:09.343205   13240 start.go:360] acquireMachinesLock for multinode-515700-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:51:09.343205   13240 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m03"
	I0429 20:51:09.343205   13240 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:51:09.344097   13240 fix.go:54] fixHost starting: m03
	I0429 20:51:09.344784   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:11.529552   13240 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 20:51:11.529552   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:11.530030   13240 fix.go:112] recreateIfNeeded on multinode-515700-m03: state=Stopped err=<nil>
	W0429 20:51:11.530030   13240 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:51:11.535007   13240 out.go:177] * Restarting existing hyperv VM for "multinode-515700-m03" ...
	I0429 20:51:11.537036   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m03
	I0429 20:51:14.664672   13240 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:51:14.665370   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:14.665370   13240 main.go:141] libmachine: Waiting for host to start...
	I0429 20:51:14.665370   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:16.908625   13240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:51:16.908625   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:16.908785   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:51:19.481994   13240 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:51:19.481994   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:20.496698   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:22.720573   13240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:51:22.720573   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:22.721426   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:51:25.306755   13240 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:51:25.307819   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:26.317914   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:28.529931   13240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:51:28.530931   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:28.530962   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:51:31.118922   13240 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:51:31.119766   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:32.133964   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:34.385745   13240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:51:34.385745   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:34.386852   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:51:36.985584   13240 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:51:36.986038   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:37.993936   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:40.211750   13240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:51:40.211750   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:40.212376   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:51:42.887208   13240 main.go:141] libmachine: [stdout =====>] : 172.17.244.104
	
	I0429 20:51:42.887208   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:42.890866   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:45.065622   13240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:51:45.066261   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:45.066261   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:51:47.709574   13240 main.go:141] libmachine: [stdout =====>] : 172.17.244.104
	
	I0429 20:51:47.709574   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:47.710826   13240 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:51:47.713648   13240 machine.go:94] provisionDockerMachine start ...
	I0429 20:51:47.713769   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:49.860436   13240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:51:49.860436   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:49.861341   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:51:52.527192   13240 main.go:141] libmachine: [stdout =====>] : 172.17.244.104
	
	I0429 20:51:52.527371   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:52.533292   13240 main.go:141] libmachine: Using SSH client type: native
	I0429 20:51:52.533985   13240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.244.104 22 <nil> <nil>}
	I0429 20:51:52.533985   13240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:51:52.673893   13240 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:51:52.673977   13240 buildroot.go:166] provisioning hostname "multinode-515700-m03"
	I0429 20:51:52.674042   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:54.826423   13240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:51:54.826574   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:54.826574   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 20:51:57.429466   13240 main.go:141] libmachine: [stdout =====>] : 172.17.244.104
	
	I0429 20:51:57.429848   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:57.436271   13240 main.go:141] libmachine: Using SSH client type: native
	I0429 20:51:57.436860   13240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.244.104 22 <nil> <nil>}
	I0429 20:51:57.436860   13240 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700-m03 && echo "multinode-515700-m03" | sudo tee /etc/hostname
	I0429 20:51:57.595156   13240 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m03
	
	I0429 20:51:57.595264   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
	I0429 20:51:59.766691   13240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:51:59.766691   13240 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:51:59.767079   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]

** /stderr **
multinode_test.go:284: W0429 20:51:07.078500   13240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 20:51:07.172540   13240 out.go:291] Setting OutFile to fd 1000 ...
I0429 20:51:07.190379   13240 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 20:51:07.190508   13240 out.go:304] Setting ErrFile to fd 1724...
I0429 20:51:07.190508   13240 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 20:51:07.213392   13240 mustload.go:65] Loading cluster: multinode-515700
I0429 20:51:07.214326   13240 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 20:51:07.215372   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:09.332908   13240 main.go:141] libmachine: [stdout =====>] : Off

I0429 20:51:09.333050   13240 main.go:141] libmachine: [stderr =====>] : 
W0429 20:51:09.333188   13240 host.go:58] "multinode-515700-m03" host status: Stopped
I0429 20:51:09.337394   13240 out.go:177] * Starting "multinode-515700-m03" worker node in "multinode-515700" cluster
I0429 20:51:09.339597   13240 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0429 20:51:09.339597   13240 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0429 20:51:09.340182   13240 cache.go:56] Caching tarball of preloaded images
I0429 20:51:09.340430   13240 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0429 20:51:09.340430   13240 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0429 20:51:09.341165   13240 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
I0429 20:51:09.343205   13240 start.go:360] acquireMachinesLock for multinode-515700-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0429 20:51:09.343205   13240 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m03"
I0429 20:51:09.343205   13240 start.go:96] Skipping create...Using existing machine configuration
I0429 20:51:09.344097   13240 fix.go:54] fixHost starting: m03
I0429 20:51:09.344784   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:11.529552   13240 main.go:141] libmachine: [stdout =====>] : Off

I0429 20:51:11.529552   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:11.530030   13240 fix.go:112] recreateIfNeeded on multinode-515700-m03: state=Stopped err=<nil>
W0429 20:51:11.530030   13240 fix.go:138] unexpected machine state, will restart: <nil>
I0429 20:51:11.535007   13240 out.go:177] * Restarting existing hyperv VM for "multinode-515700-m03" ...
I0429 20:51:11.537036   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m03
I0429 20:51:14.664672   13240 main.go:141] libmachine: [stdout =====>] : 
I0429 20:51:14.665370   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:14.665370   13240 main.go:141] libmachine: Waiting for host to start...
I0429 20:51:14.665370   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:16.908625   13240 main.go:141] libmachine: [stdout =====>] : Running

I0429 20:51:16.908625   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:16.908785   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
I0429 20:51:19.481994   13240 main.go:141] libmachine: [stdout =====>] : 
I0429 20:51:19.481994   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:20.496698   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:22.720573   13240 main.go:141] libmachine: [stdout =====>] : Running

I0429 20:51:22.720573   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:22.721426   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
I0429 20:51:25.306755   13240 main.go:141] libmachine: [stdout =====>] : 
I0429 20:51:25.307819   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:26.317914   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:28.529931   13240 main.go:141] libmachine: [stdout =====>] : Running

I0429 20:51:28.530931   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:28.530962   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
I0429 20:51:31.118922   13240 main.go:141] libmachine: [stdout =====>] : 
I0429 20:51:31.119766   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:32.133964   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:34.385745   13240 main.go:141] libmachine: [stdout =====>] : Running

I0429 20:51:34.385745   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:34.386852   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
I0429 20:51:36.985584   13240 main.go:141] libmachine: [stdout =====>] : 
I0429 20:51:36.986038   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:37.993936   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:40.211750   13240 main.go:141] libmachine: [stdout =====>] : Running

I0429 20:51:40.211750   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:40.212376   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
I0429 20:51:42.887208   13240 main.go:141] libmachine: [stdout =====>] : 172.17.244.104

I0429 20:51:42.887208   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:42.890866   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:45.065622   13240 main.go:141] libmachine: [stdout =====>] : Running

I0429 20:51:45.066261   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:45.066261   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
I0429 20:51:47.709574   13240 main.go:141] libmachine: [stdout =====>] : 172.17.244.104

I0429 20:51:47.709574   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:47.710826   13240 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
I0429 20:51:47.713648   13240 machine.go:94] provisionDockerMachine start ...
I0429 20:51:47.713769   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:49.860436   13240 main.go:141] libmachine: [stdout =====>] : Running

I0429 20:51:49.860436   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:49.861341   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
I0429 20:51:52.527192   13240 main.go:141] libmachine: [stdout =====>] : 172.17.244.104

I0429 20:51:52.527371   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:52.533292   13240 main.go:141] libmachine: Using SSH client type: native
I0429 20:51:52.533985   13240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.244.104 22 <nil> <nil>}
I0429 20:51:52.533985   13240 main.go:141] libmachine: About to run SSH command:
hostname
I0429 20:51:52.673893   13240 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0429 20:51:52.673977   13240 buildroot.go:166] provisioning hostname "multinode-515700-m03"
I0429 20:51:52.674042   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:54.826423   13240 main.go:141] libmachine: [stdout =====>] : Running

I0429 20:51:54.826574   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:54.826574   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
I0429 20:51:57.429466   13240 main.go:141] libmachine: [stdout =====>] : 172.17.244.104

I0429 20:51:57.429848   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:57.436271   13240 main.go:141] libmachine: Using SSH client type: native
I0429 20:51:57.436860   13240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.244.104 22 <nil> <nil>}
I0429 20:51:57.436860   13240 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-515700-m03 && echo "multinode-515700-m03" | sudo tee /etc/hostname
I0429 20:51:57.595156   13240 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m03

I0429 20:51:57.595264   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m03 ).state
I0429 20:51:59.766691   13240 main.go:141] libmachine: [stdout =====>] : Running

I0429 20:51:59.766691   13240 main.go:141] libmachine: [stderr =====>] : 
I0429 20:51:59.767079   13240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m03 ).networkadapters[0]).ipaddresses[0]
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-515700 node start m03 -v=7 --alsologtostderr": exit status 1
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr: context deadline exceeded (67.5µs)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-515700 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700
E0429 20:53:10.231885   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-515700 -n multinode-515700: (12.4832741s)
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-515700 logs -n 25: (8.6015784s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:39 UTC | 29 Apr 24 20:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:40 UTC | 29 Apr 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2 -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:41 UTC | 29 Apr 24 20:41 UTC |
	|         | busybox-fc5497c4f-dv5v8 -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- get pods -o   | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-2t4c2              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC | 29 Apr 24 20:42 UTC |
	|         | busybox-fc5497c4f-dv5v8              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-515700 -- exec          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:42 UTC |                     |
	|         | busybox-fc5497c4f-dv5v8 -- sh        |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.240.1            |                  |                   |         |                     |                     |
	| node    | add -p multinode-515700 -v 3         | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:43 UTC | 29 Apr 24 20:46 UTC |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | multinode-515700 node stop m03       | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:49 UTC | 29 Apr 24 20:49 UTC |
	| node    | multinode-515700 node start          | multinode-515700 | minikube6\jenkins | v1.33.0 | 29 Apr 24 20:51 UTC |                     |
	|         | m03 -v=7 --alsologtostderr           |                  |                   |         |                     |                     |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:22:01
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:22:01.431751    6560 out.go:291] Setting OutFile to fd 1000 ...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.432590    6560 out.go:304] Setting ErrFile to fd 1156...
	I0429 20:22:01.432590    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:22:01.463325    6560 out.go:298] Setting JSON to false
	I0429 20:22:01.467738    6560 start.go:129] hostinfo: {"hostname":"minikube6","uptime":24060,"bootTime":1714398060,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 20:22:01.467738    6560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 20:22:01.473386    6560 out.go:177] * [multinode-515700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 20:22:01.477900    6560 notify.go:220] Checking for updates...
	I0429 20:22:01.480328    6560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:22:01.485602    6560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:22:01.488123    6560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 20:22:01.490657    6560 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:22:01.493249    6560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:22:01.496241    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:22:01.497610    6560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:22:06.930154    6560 out.go:177] * Using the hyperv driver based on user configuration
	I0429 20:22:06.933587    6560 start.go:297] selected driver: hyperv
	I0429 20:22:06.933587    6560 start.go:901] validating driver "hyperv" against <nil>
	I0429 20:22:06.933587    6560 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:22:06.986262    6560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 20:22:06.987723    6560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:22:06.988334    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:22:06.988334    6560 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 20:22:06.988334    6560 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 20:22:06.988334    6560 start.go:340] cluster config:
	{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:22:06.988334    6560 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:22:06.992867    6560 out.go:177] * Starting "multinode-515700" primary control-plane node in "multinode-515700" cluster
	I0429 20:22:06.995976    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:22:06.996499    6560 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 20:22:06.996703    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:22:06.996741    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:22:06.996741    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:22:06.996741    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:22:06.996741    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json: {Name:mkdf346f9e30a055d7c79ffb416c8ce539e0c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:360] acquireMachinesLock for multinode-515700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:22:06.998017    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700"
	I0429 20:22:06.999081    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:22:06.999081    6560 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 20:22:07.006481    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:22:07.006790    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:22:07.006790    6560 client.go:168] LocalClient.Create starting
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007069    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:22:07.007759    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:22:09.217702    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:22:09.217822    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:09.217951    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:22:11.056235    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:11.057046    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:12.617678    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:12.618512    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:16.458551    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:16.461966    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:22:17.019827    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: Creating VM...
	I0429 20:22:17.139112    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:22:20.139974    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:20.140355    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:22:20.140483    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:22:22.004347    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:22.004896    6560 main.go:141] libmachine: Creating VHD
	I0429 20:22:22.004896    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:22:25.795387    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DA11902-3EE7-4F99-A00A-752C0686FD99
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:22:25.796445    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:25.796496    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:22:25.796702    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:22:25.814462    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:22:29.034595    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:29.035273    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:29.035337    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd' -SizeBytes 20000MB
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:31.670928    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:31.671427    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-515700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:22:35.461751    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:35.461856    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700 -DynamicMemoryEnabled $false
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:37.723671    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:37.723890    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700 -Count 2
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:39.924306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\boot2docker.iso'
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:42.557989    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:42.558432    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\disk.vhd'
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:45.265129    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:45.265400    6560 main.go:141] libmachine: Starting VM...
	I0429 20:22:45.265400    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [stderr =====>] : 
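Every `[executing ==>]` entry above has the same shape: the driver shells out to `powershell.exe` with `-NoProfile -NonInteractive` and a single Hyper-V cmdlet. A minimal sketch of how such a command line could be assembled; the helper name `ps_cmd` is ours, not minikube's:

```shell
#!/bin/sh
# Hypothetical helper mirroring the command lines logged above: each Hyper-V
# cmdlet is run through a fresh, non-interactive PowerShell instance.
PSH='C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe'

ps_cmd() {
    # $1: the Hyper-V cmdlet and its arguments, as one string
    printf '%s -NoProfile -NonInteractive %s\n' "$PSH" "$1"
}

ps_cmd 'Hyper-V\Start-VM multinode-515700'
```

Running each cmdlet in its own process is why the log shows a separate stdout/stderr pair per step.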
	I0429 20:22:48.486826    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:22:48.486826    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:50.732199    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:50.733048    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:50.733149    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:53.294800    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:54.308058    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:22:56.517062    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:22:56.517138    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:22:59.110985    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:22:59.111613    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:00.127675    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:02.349553    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:02.349860    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:04.973013    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:05.987459    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:08.223558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:08.224322    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:10.790333    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:23:10.791338    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:11.803237    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:14.061111    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:14.061252    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:16.718106    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:18.855377    6560 main.go:141] libmachine: [stderr =====>] : 
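The "Waiting for host to start..." phase above polls the VM state and the first NIC's first IP address until a non-empty address comes back, pausing between attempts. The pattern can be sketched like this; `get_ip` is a stub standing in for the real `Get-VM` query, and the names are ours:

```shell
#!/bin/sh
# Sketch of the poll-until-IP loop. The stub returns an empty string until the
# third attempt, modelling a guest whose network stack is still coming up.
attempt=0
get_ip() {
    # stands in for: (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
    if [ "$1" -ge 3 ]; then echo "172.17.241.25"; fi
}

wait_for_ip() {
    while :; do
        attempt=$((attempt + 1))
        ip=$(get_ip "$attempt")
        if [ -n "$ip" ]; then echo "$ip"; return 0; fi
        sleep 0.1   # the real driver waits roughly a second between queries
    done
}

wait_for_ip
```

This matches the log's rhythm of alternating state/IP queries with empty stdout until 20:23:16, when `172.17.241.25` first appears.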
	I0429 20:23:18.855659    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:23:18.855911    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:21.063683    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:21.063761    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:23.697335    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:23.697580    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:23.703285    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:23.713965    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:23.713965    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:23:23.854760    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:23:23.854760    6560 buildroot.go:166] provisioning hostname "multinode-515700"
	I0429 20:23:23.854760    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:26.029157    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:26.029995    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:26.030093    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:28.619083    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:28.624899    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:28.625217    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:28.625495    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700 && echo "multinode-515700" | sudo tee /etc/hostname
	I0429 20:23:28.799265    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700
	
	I0429 20:23:28.799376    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:30.923838    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:30.924333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:33.581684    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:33.588985    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:33.589381    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:33.589381    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:23:33.743242    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
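The `/etc/hosts` command above first checks whether any line already ends in the hostname; if not, it either rewrites an existing `127.0.1.1` entry in place or appends a new one. The same logic can be run against a scratch file instead of `/etc/hosts`; the sample contents are ours:

```shell
#!/bin/sh
# Reproduction of the hostname-entry logic above, pointed at a temp file so it
# runs without sudo. Assumes GNU grep/sed for the \s escape, as on the guest.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$hosts"

name=multinode-515700
if ! grep -xq ".*\s$name" "$hosts"; then        # hostname not present yet
    if grep -xq '127.0.1.1\s.*' "$hosts"; then  # a 127.0.1.1 line exists:
        sed -i "s/^127.0.1.1\s.*/127.0.1.1 $name/g" "$hosts"   # rewrite it
    else
        echo "127.0.1.1 $name" >> "$hosts"      # otherwise append one
    fi
fi
cat "$hosts"
```

In the log the command produced no output because the `sed` branch ran, which writes nothing to stdout; only the `tee -a` branch would echo the new line.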
	I0429 20:23:33.743242    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:23:33.743242    6560 buildroot.go:174] setting up certificates
	I0429 20:23:33.743242    6560 provision.go:84] configureAuth start
	I0429 20:23:33.743939    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:35.885562    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:35.886662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:38.476558    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:38.477298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:40.581307    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:40.582231    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:43.165623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:43.165853    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:43.165933    6560 provision.go:143] copyHostCerts
	I0429 20:23:43.166093    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:23:43.166093    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:23:43.166093    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:23:43.166722    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:23:43.168141    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:23:43.168305    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:23:43.168305    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:23:43.168887    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:23:43.169614    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:23:43.170245    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:23:43.170340    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:23:43.170731    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
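`copyHostCerts` runs a found → rm → cp sequence for each of `key.pem`, `ca.pem`, and `cert.pem`, so a stale copy in the profile root never survives a re-provision. A stripped-down sketch of that remove-then-copy step; the directory layout and file contents here are placeholders, not the real `.minikube` store:

```shell
#!/bin/sh
# Remove-then-copy, as in the "found ..., removing ..." / "cp: ..." log lines.
# A scratch directory stands in for the .minikube store.
store=$(mktemp -d)
mkdir -p "$store/certs"
echo "fresh-key" > "$store/certs/key.pem"
echo "stale-key" > "$store/key.pem"     # pre-existing copy to be replaced

copy_host_cert() {
    src=$1; dst=$2
    if [ -f "$dst" ]; then
        rm -f "$dst"        # "found <dst>, removing ..."
    fi
    cp "$src" "$dst"        # "cp: <src> --> <dst>"
}

copy_host_cert "$store/certs/key.pem" "$store/key.pem"
cat "$store/key.pem"
```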
	I0429 20:23:43.171712    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700 san=[127.0.0.1 172.17.241.25 localhost minikube multinode-515700]
	I0429 20:23:43.368646    6560 provision.go:177] copyRemoteCerts
	I0429 20:23:43.382882    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:23:43.382882    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:45.539057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:45.539114    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:48.109324    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:48.109324    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:23:48.217340    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8343588s)
	I0429 20:23:48.217478    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:23:48.218375    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:23:48.267636    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:23:48.267636    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 20:23:48.316493    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:23:48.316784    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:23:48.372851    6560 provision.go:87] duration metric: took 14.6294509s to configureAuth
	I0429 20:23:48.372952    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:23:48.373086    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:23:48.373086    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:50.522765    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:50.522998    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:50.523146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:53.163730    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:53.169650    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:53.170462    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:53.170462    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:23:53.302673    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:23:53.302726    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:23:53.302726    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:23:53.302726    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:23:55.434984    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:55.435042    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:23:58.060160    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:23:58.061082    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:23:58.067077    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:23:58.068199    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:23:58.068292    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:23:58.226608    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:23:58.227212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:00.358757    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:00.358933    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:02.944293    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:02.944373    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:02.950227    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:02.950958    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:02.950958    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:24:05.224184    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
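The install step above only swaps in `docker.service.new` when it differs from what is already on disk: `diff -u` exits non-zero both when the files differ and when the target is missing (hence the "can't stat" message here), and the `||` branch then moves the new unit into place and reloads/restarts the service. The compare-and-replace core, minus the `systemctl` calls, can be exercised on temp files:

```shell
#!/bin/sh
# The diff || { mv ...; reload; } idiom from the log, with the systemctl steps
# replaced by an echo so it runs anywhere.
dir=$(mktemp -d)
printf '[Unit]\nDescription=Docker Application Container Engine\n' \
    > "$dir/docker.service.new"

# docker.service does not exist yet, so diff fails ("can't stat") and the
# replacement branch runs -- exactly the first-boot case in the log above.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null \
    || { mv "$dir/docker.service.new" "$dir/docker.service"; echo "unit installed"; }
```

On a later reprovision with an unchanged unit, `diff` would succeed and the whole replace/restart branch would be skipped.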
	
	I0429 20:24:05.224184    6560 machine.go:97] duration metric: took 46.3681587s to provisionDockerMachine
	I0429 20:24:05.224184    6560 client.go:171] duration metric: took 1m58.2164577s to LocalClient.Create
	I0429 20:24:05.224184    6560 start.go:167] duration metric: took 1m58.2164577s to libmachine.API.Create "multinode-515700"
	I0429 20:24:05.224184    6560 start.go:293] postStartSetup for "multinode-515700" (driver="hyperv")
	I0429 20:24:05.224184    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:24:05.241199    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:24:05.241199    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:07.393879    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:07.393938    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:09.983789    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:09.984033    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:09.984469    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:10.092254    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8510176s)
	I0429 20:24:10.107982    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:24:10.116700    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:24:10.116700    6560 command_runner.go:130] > ID=buildroot
	I0429 20:24:10.116700    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:24:10.116700    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:24:10.116700    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:24:10.116700    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:24:10.117268    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:24:10.118515    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:24:10.118515    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:24:10.132514    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:24:10.152888    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:24:10.201665    6560 start.go:296] duration metric: took 4.9774423s for postStartSetup
	I0429 20:24:10.204966    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:12.345708    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:12.345785    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:12.345855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:14.957426    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:14.957675    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:24:14.960758    6560 start.go:128] duration metric: took 2m7.9606641s to createHost
	I0429 20:24:14.962026    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:17.100197    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:17.100281    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:17.100354    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:19.707054    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:19.725196    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:19.725860    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:19.725860    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:24:19.867560    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422259.868914581
	
	I0429 20:24:19.867560    6560 fix.go:216] guest clock: 1714422259.868914581
	I0429 20:24:19.867694    6560 fix.go:229] Guest: 2024-04-29 20:24:19.868914581 +0000 UTC Remote: 2024-04-29 20:24:14.9613787 +0000 UTC m=+133.724240401 (delta=4.907535881s)
	I0429 20:24:19.867694    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:22.005967    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:22.006448    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:24.578292    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:24.588016    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:24:24.588016    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.241.25 22 <nil> <nil>}
	I0429 20:24:24.588016    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422259
	I0429 20:24:24.741766    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:24:19 UTC 2024
	
	I0429 20:24:24.741837    6560 fix.go:236] clock set: Mon Apr 29 20:24:19 UTC 2024
	 (err=<nil>)
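Editor's note: the `fix.go` lines above compare the guest clock against the host and reset it with `sudo date -s @<epoch>` when they drift apart. A minimal sketch of that drift check, using hypothetical epoch values instead of a live VM:

```shell
#!/bin/sh
# Sketch of the guest-clock drift check performed by fix.go above.
# guest_epoch would come from `date +%s` over SSH; host_epoch from the
# local clock. Both values here are hypothetical.
guest_epoch=1714422259
host_epoch=1714422254

drift=$((guest_epoch - host_epoch))
# Normalize to an absolute value.
[ "$drift" -lt 0 ] && drift=$((-drift))

if [ "$drift" -gt 1 ]; then
    # On the real VM this would be: ssh ... "sudo date -s @$host_epoch"
    echo "clock drift ${drift}s: would run 'sudo date -s @${host_epoch}'"
else
    echo "clock within tolerance"
fi
```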
	I0429 20:24:24.741837    6560 start.go:83] releasing machines lock for "multinode-515700", held for 2m17.7427319s
	I0429 20:24:24.742129    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:26.884030    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:26.884301    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:29.475377    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:29.476046    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:29.480912    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:24:29.481639    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:29.493304    6560 ssh_runner.go:195] Run: cat /version.json
	I0429 20:24:29.493304    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:24:31.702922    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.703144    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:31.704045    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:24:34.435635    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.436190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.436258    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.480228    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:24:34.481073    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:24:34.481135    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:24:34.531424    6560 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 20:24:34.531720    6560 ssh_runner.go:235] Completed: cat /version.json: (5.0383759s)
	I0429 20:24:34.545943    6560 ssh_runner.go:195] Run: systemctl --version
	I0429 20:24:34.614256    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:24:34.615354    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1343125s)
	I0429 20:24:34.615354    6560 command_runner.go:130] > systemd 252 (252)
	I0429 20:24:34.615354    6560 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 20:24:34.630005    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:24:34.639051    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 20:24:34.639955    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:24:34.653590    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:24:34.683800    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:24:34.683903    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:24:34.683903    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:34.684139    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:34.720958    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:24:34.735137    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:24:34.769077    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:24:34.791121    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:24:34.804751    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:24:34.838781    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.871052    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:24:34.905043    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:24:34.940043    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:24:34.975295    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:24:35.009502    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:24:35.044104    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:24:35.078095    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:24:35.099570    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:24:35.114246    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:24:35.146794    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:35.365920    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
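Editor's note: the run of `sed` commands above switches containerd to the cgroupfs driver and pins the pause image before the `systemctl restart containerd`. A condensed sketch of the same two edits against a scratch copy of `config.toml` (temp file here instead of `/etc/containerd`; GNU sed assumed, as on the guest):

```shell
#!/bin/sh
# Apply the cgroupfs-related edits from the log to a scratch config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
  sandbox_image = "registry.k8s.io/pause:3.8"
EOF

# Same substitutions the log runs over SSH.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
```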
	I0429 20:24:35.402710    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:24:35.417050    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Unit]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:24:35.443946    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:24:35.443946    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:24:35.443946    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:24:35.443946    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Service]
	I0429 20:24:35.443946    6560 command_runner.go:130] > Type=notify
	I0429 20:24:35.443946    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:24:35.443946    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:24:35.443946    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:24:35.443946    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:24:35.443946    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:24:35.443946    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:24:35.443946    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:24:35.443946    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:24:35.443946    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:24:35.443946    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:24:35.443946    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:24:35.443946    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:24:35.443946    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:24:35.443946    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:24:35.443946    6560 command_runner.go:130] > Delegate=yes
	I0429 20:24:35.443946    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:24:35.443946    6560 command_runner.go:130] > KillMode=process
	I0429 20:24:35.443946    6560 command_runner.go:130] > [Install]
	I0429 20:24:35.444947    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:24:35.457957    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.500818    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:24:35.548559    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:24:35.585869    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.622879    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:24:35.694256    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:24:35.721660    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:24:35.757211    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:24:35.773795    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:24:35.779277    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:24:35.793892    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:24:35.813834    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:24:35.865638    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:24:36.085117    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:24:36.291781    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:24:36.291781    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:24:36.337637    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:36.567033    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:39.106704    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396504s)
	I0429 20:24:39.121937    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 20:24:39.164421    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.201973    6560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 20:24:39.432817    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 20:24:39.648494    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:39.872471    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 20:24:39.918782    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 20:24:39.959078    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:40.189711    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 20:24:40.314827    6560 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 20:24:40.327765    6560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 20:24:40.337989    6560 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 20:24:40.338077    6560 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 20:24:40.338077    6560 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 20:24:40.338145    6560 command_runner.go:130] > Access: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Modify: 2024-04-29 20:24:40.223771338 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] > Change: 2024-04-29 20:24:40.227771386 +0000
	I0429 20:24:40.338145    6560 command_runner.go:130] >  Birth: -
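Editor's note: "Will wait 60s for socket path" above is a stat-based poll with a deadline. A sketch of the same wait loop, pointed at a temp path so it is safe to run anywhere (the real check targets `/var/run/cri-dockerd.sock`):

```shell
#!/bin/sh
# Poll for a path with a 60s deadline, as start.go does for the
# cri-dockerd socket above. A plain temp file stands in for the socket.
sock="${TMPDIR:-/tmp}/cri-demo.$$"
: > "$sock"
found=0
deadline=$(( $(date +%s) + 60 ))
while [ "$(date +%s)" -lt "$deadline" ]; do
    if stat "$sock" >/dev/null 2>&1; then
        found=1
        break
    fi
    sleep 1
done
rm -f "$sock"
```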
	I0429 20:24:40.338228    6560 start.go:562] Will wait 60s for crictl version
	I0429 20:24:40.353543    6560 ssh_runner.go:195] Run: which crictl
	I0429 20:24:40.359551    6560 command_runner.go:130] > /usr/bin/crictl
	I0429 20:24:40.372542    6560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:24:40.422534    6560 command_runner.go:130] > Version:  0.1.0
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeName:  docker
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 20:24:40.422534    6560 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 20:24:40.422534    6560 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 20:24:40.433531    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.468470    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.477791    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 20:24:40.510922    6560 command_runner.go:130] > 26.0.2
	I0429 20:24:40.518057    6560 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 20:24:40.518283    6560 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 20:24:40.522952    6560 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e0:c4:39 Flags:up|broadcast|multicast|running}
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: fe80::d7b1:cba0:b50e:5170/64
	I0429 20:24:40.527080    6560 ip.go:210] interface addr: 172.17.240.1/20
	I0429 20:24:40.538782    6560 ssh_runner.go:195] Run: grep 172.17.240.1	host.minikube.internal$ /etc/hosts
	I0429 20:24:40.546082    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
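Editor's note: the bash one-liner above is minikube's idempotent hosts-file update: drop any existing `host.minikube.internal` entry, append the current one, and copy the result back. The same pattern against a scratch hosts file:

```shell
#!/bin/sh
# Idempotent hosts-entry update, as in the /etc/hosts one-liner above,
# applied to a temp file so it is safe to run anywhere.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.17.0.9\thost.minikube.internal\n' > "$hosts"

ip="172.17.240.1"
tab=$(printf '\t')
{ grep -v "${tab}host\.minikube\.internal\$" "$hosts"; \
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```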
	I0429 20:24:40.569927    6560 kubeadm.go:877] updating cluster {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:24:40.570125    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:24:40.581034    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:40.605162    6560 docker.go:685] Got preloaded images: 
	I0429 20:24:40.605162    6560 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 20:24:40.617894    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:40.637456    6560 command_runner.go:139] > {"Repositories":{}}
	I0429 20:24:40.652557    6560 ssh_runner.go:195] Run: which lz4
	I0429 20:24:40.659728    6560 command_runner.go:130] > /usr/bin/lz4
	I0429 20:24:40.659728    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 20:24:40.676390    6560 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:24:40.682600    6560 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:24:40.683537    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 20:24:43.151463    6560 docker.go:649] duration metric: took 2.4917153s to copy over tarball
	I0429 20:24:43.166991    6560 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:24:51.777678    6560 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6106197s)
	I0429 20:24:51.777678    6560 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:24:51.848689    6560 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 20:24:51.869772    6560 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 20:24:51.869772    6560 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 20:24:51.923721    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:52.150884    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:24:55.504316    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3534062s)
	I0429 20:24:55.515091    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 20:24:55.540192    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:24:55.540357    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 20:24:55.540357    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:24:55.540557    6560 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 20:24:55.540557    6560 cache_images.go:84] Images are preloaded, skipping loading
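Editor's note: the preload check in `docker.go` above (first "wasn't preloaded", then "Images are preloaded" after the tarball is extracted) is a straight comparison of `docker images --format {{.Repository}}:{{.Tag}}` output against the expected image list. A sketch of that comparison using the listing from this log, with no Docker daemon needed:

```shell
#!/bin/sh
# Compare a `docker images --format {{.Repository}}:{{.Tag}}` listing
# against a few images kubeadm v1.30.0 needs, as docker.go does above.
got=$(mktemp)
cat > "$got" <<'EOF'
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
EOF

missing=0
for img in registry.k8s.io/kube-apiserver:v1.30.0 \
           registry.k8s.io/etcd:3.5.12-0 \
           registry.k8s.io/pause:3.9; do
    grep -qxF "$img" "$got" || { echo "$img wasn't preloaded"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "Images are preloaded, skipping loading"
```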
	I0429 20:24:55.540557    6560 kubeadm.go:928] updating node { 172.17.241.25 8443 v1.30.0 docker true true} ...
	I0429 20:24:55.540557    6560 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-515700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.241.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:24:55.550945    6560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 20:24:55.586940    6560 command_runner.go:130] > cgroupfs
	I0429 20:24:55.587354    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:24:55.587354    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:24:55.587354    6560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:24:55.587354    6560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.241.25 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-515700 NodeName:multinode-515700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.241.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.241.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:24:55.587882    6560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.241.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-515700"
	  kubeletExtraArgs:
	    node-ip: 172.17.241.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.241.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:24:55.601173    6560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubeadm
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubectl
	I0429 20:24:55.622022    6560 command_runner.go:130] > kubelet
	I0429 20:24:55.622022    6560 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:24:55.633924    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:24:55.654273    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 20:24:55.692289    6560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:24:55.726319    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:24:55.774801    6560 ssh_runner.go:195] Run: grep 172.17.241.25	control-plane.minikube.internal$ /etc/hosts
	I0429 20:24:55.781653    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.241.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
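	The /etc/hosts rewrite above is an idempotent replace-then-append pattern: strip any existing line for the host name, append the fresh mapping, then copy the result back over the original file. A minimal sketch of the same pattern against a scratch file (the `HOSTS`/`NAME`/`IP` variables here are illustrative, not minikube's):

```shell
# Scratch hosts file standing in for /etc/hosts, seeded with a stale entry.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n' > "$HOSTS"

IP="172.17.241.25"
NAME="control-plane.minikube.internal"

# Drop any stale line ending in <TAB><name>, append the fresh mapping,
# then swap the rebuilt file into place.
{ grep -v $'\t'"$NAME"'$' "$HOSTS"; printf '%s\t%s\n' "$IP" "$NAME"; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"

grep "$NAME" "$HOSTS"
```

	Writing to a temp file (`/tmp/h.$$` in the log) and then copying back with `sudo cp`, rather than moving, keeps the ownership, mode, and inode of the real `/etc/hosts` intact.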
	I0429 20:24:55.820570    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:24:56.051044    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:24:56.087660    6560 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700 for IP: 172.17.241.25
	I0429 20:24:56.087753    6560 certs.go:194] generating shared ca certs ...
	I0429 20:24:56.087824    6560 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 20:24:56.088315    6560 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 20:24:56.089063    6560 certs.go:256] generating profile certs ...
	I0429 20:24:56.089855    6560 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key
	I0429 20:24:56.089855    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt with IP's: []
	I0429 20:24:56.283640    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt ...
	I0429 20:24:56.284633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.crt: {Name:mk1286f657dae134d1e4806ec4fc1d780c02f0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.285633    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key ...
	I0429 20:24:56.285633    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\client.key: {Name:mka98d4501f3f942abed1092b1c97c4a2bbd30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.286633    6560 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d
	I0429 20:24:56.287300    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.241.25]
	I0429 20:24:56.456862    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d ...
	I0429 20:24:56.456862    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d: {Name:mk09d828589f59d94791e90fc999c9ce1101118e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d ...
	I0429 20:24:56.458476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d: {Name:mk92ebf0409a99e4a3e3430ff86080f164f4bc0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.458796    6560 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt
	I0429 20:24:56.473961    6560 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key.e4b5899d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key
	I0429 20:24:56.474965    6560 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key
	I0429 20:24:56.474965    6560 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt with IP's: []
	I0429 20:24:56.680472    6560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt ...
	I0429 20:24:56.680472    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt: {Name:mkc600562c7738e3eec9de4025428e3048df463a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:24:56.682476    6560 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key ...
	I0429 20:24:56.682476    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key: {Name:mkc9ba6e1afbc9ca05ce8802b568a72bfd19a90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
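	minikube generates these profile certs in Go (crypto.go) rather than by shelling out, but the same shapes can be sketched with the openssl CLI: a self-signed CA standing in for minikubeCA, and an apiserver serving cert carrying the IP SANs the log lists ([10.96.0.1 127.0.0.1 10.0.0.1 172.17.241.25]). Everything below runs in a throwaway directory; file names are illustrative:

```shell
DIR=$(mktemp -d)

# Self-signed CA standing in for minikubeCA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/ca.key" -out "$DIR/ca.crt" -subj "/CN=minikubeCA" 2>/dev/null

# Key + CSR for the apiserver profile cert.
openssl req -newkey rsa:2048 -nodes \
  -keyout "$DIR/apiserver.key" -out "$DIR/apiserver.csr" \
  -subj "/CN=minikube" 2>/dev/null

# Sign with the CA, adding the IP SANs from the log.
printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.17.241.25\n' > "$DIR/san.cnf"
openssl x509 -req -in "$DIR/apiserver.csr" -CA "$DIR/ca.crt" -CAkey "$DIR/ca.key" \
  -CAcreateserial -days 1 -out "$DIR/apiserver.crt" -extfile "$DIR/san.cnf" 2>/dev/null

openssl x509 -noout -text -in "$DIR/apiserver.crt" | grep -A1 'Subject Alternative Name'
```

	The SAN list is what lets clients validate the apiserver both at the in-cluster service IP (10.96.0.1) and at the node IP.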
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 20:24:56.683479    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 20:24:56.684576    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 20:24:56.685482    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 20:24:56.693323    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 20:24:56.701358    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 20:24:56.702409    6560 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 20:24:56.702718    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 20:24:56.702843    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 20:24:56.703313    6560 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem -> /usr/share/ca-certificates/13756.pem
	I0429 20:24:56.704314    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /usr/share/ca-certificates/137562.pem
	I0429 20:24:56.705315    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:24:56.758912    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 20:24:56.809584    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:24:56.860874    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:24:56.918708    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:24:56.969377    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:24:57.018903    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:24:57.070438    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:24:57.119823    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:24:57.168671    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 20:24:57.216697    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 20:24:57.263605    6560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:24:57.314590    6560 ssh_runner.go:195] Run: openssl version
	I0429 20:24:57.325614    6560 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 20:24:57.340061    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:24:57.374639    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.382273    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.394971    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:24:57.404667    6560 command_runner.go:130] > b5213941
	I0429 20:24:57.419162    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:24:57.454540    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 20:24:57.494441    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.501867    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.502224    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.517134    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 20:24:57.527174    6560 command_runner.go:130] > 51391683
	I0429 20:24:57.544472    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 20:24:57.579789    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 20:24:57.613535    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622605    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.622696    6560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.637764    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 20:24:57.649176    6560 command_runner.go:130] > 3ec20f2e
	I0429 20:24:57.665410    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
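	The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is located via a symlink named `<subject-hash>.0` pointing at the PEM file (the convention `c_rehash` automates). A self-contained sketch in a scratch directory, with illustrative file names:

```shell
CERTS=$(mktemp -d)

# A throwaway self-signed CA standing in for minikubeCA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$CERTS/ca.key" -out "$CERTS/minikubeCA.pem" -subj "/CN=minikubeCA" 2>/dev/null

# Subject-name hash (the log's b5213941 is this value for the real minikubeCA).
HASH=$(openssl x509 -hash -noout -in "$CERTS/minikubeCA.pem")

# Install the cert under the name OpenSSL's directory lookup expects.
ln -fs "$CERTS/minikubeCA.pem" "$CERTS/$HASH.0"
ls -l "$CERTS/$HASH.0"
```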
	I0429 20:24:57.708796    6560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:24:57.716466    6560 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717133    6560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 20:24:57.717510    6560 kubeadm.go:391] StartCluster: {Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:24:57.729105    6560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 20:24:57.771112    6560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 20:24:57.792952    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 20:24:57.793448    6560 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 20:24:57.807601    6560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:24:57.837965    6560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 20:24:57.856146    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 20:24:57.856820    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:24:57.856820    6560 kubeadm.go:156] found existing configuration files:
	
	I0429 20:24:57.872870    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:24:57.892109    6560 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.892549    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:24:57.905782    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:24:57.939062    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:24:57.957607    6560 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.957753    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:24:57.972479    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:24:58.006849    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.025918    6560 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.025918    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:24:58.039054    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:24:58.072026    6560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:24:58.092314    6560 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.092673    6560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:24:58.105776    6560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
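	The four "Process exited with status 2" checks above lean on grep's exit-code contract: 0 for a match, 1 for no match, and 2 for an error such as a missing file. That distinction is how a never-initialized node (file absent, status 2) is told apart from a stale kubeconfig pointing at a different endpoint (file present but no match, status 1). A quick demonstration against a scratch directory:

```shell
TMP=$(mktemp -d)

# File absent: grep reports an error.
grep -q needle "$TMP/admin.conf" 2>/dev/null; echo "missing file -> $?"

# File present but no match.
touch "$TMP/admin.conf"
grep -q needle "$TMP/admin.conf"; echo "no match -> $?"

# File present with a match.
echo needle >> "$TMP/admin.conf"
grep -q needle "$TMP/admin.conf"; echo "match -> $?"
```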
	I0429 20:24:58.124274    6560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:24:58.562957    6560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:24:58.562957    6560 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:25:12.186137    6560 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186137    6560 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:25:12.186277    6560 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 20:25:12.186320    6560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:25:12.186516    6560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:25:12.186548    6560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.186548    6560 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:25:12.187085    6560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.187131    6560 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:25:12.190071    6560 out.go:204]   - Generating certificates and keys ...
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:25:12.190071    6560 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190071    6560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:25:12.190667    6560 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 20:25:12.190717    6560 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.190717    6560 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 20:25:12.191251    6560 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191251    6560 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.191715    6560 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.191715    6560 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-515700] and IPs [172.17.241.25 127.0.0.1 ::1]
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 20:25:12.192414    6560 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0429 20:25:12.192414    6560 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 20:25:12.193040    6560 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193086    6560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:25:12.193143    6560 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193143    6560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:25:12.193701    6560 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193701    6560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:25:12.193843    6560 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:25:12.193843    6560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.198949    6560 out.go:204]   - Booting up control plane ...
	I0429 20:25:12.193843    6560 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:25:12.199175    6560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199175    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:25:12.199855    6560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:25:12.199910    6560 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 20:25:12.199910    6560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:25:12.200494    6560 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200494    6560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.020403644s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:25:12.200663    6560 kubeadm.go:309] [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201207    6560 command_runner.go:130] > [api-check] The API server is healthy after 7.502469982s
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:25:12.201443    6560 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:25:12.201443    6560 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.201443    6560 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:25:12.202201    6560 command_runner.go:130] > [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [mark-control-plane] Marking the node multinode-515700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:25:12.202201    6560 kubeadm.go:309] [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 out.go:204]   - Configuring RBAC rules ...
	I0429 20:25:12.202201    6560 command_runner.go:130] > [bootstrap-token] Using token: 37m7f9.ot94yshw4qor9i7b
	I0429 20:25:12.204361    6560 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.204361    6560 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:25:12.205328    6560 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.205328    6560 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:25:12.206433    6560 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:25:12.206433    6560 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:25:12.206433    6560 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206433    6560 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 20:25:12.206983    6560 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 20:25:12.206983    6560 kubeadm.go:309] 
	I0429 20:25:12.207142    6560 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 20:25:12.207181    6560 kubeadm.go:309] 
	I0429 20:25:12.207365    6560 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207404    6560 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:25:12.207464    6560 kubeadm.go:309] 
	I0429 20:25:12.207514    6560 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 20:25:12.207589    6560 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:25:12.207764    6560 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.207807    6560 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:25:12.208030    6560 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:25:12.208069    6560 kubeadm.go:309] 
	I0429 20:25:12.208230    6560 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208230    6560 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:25:12.208281    6560 kubeadm.go:309] 
	I0429 20:25:12.208375    6560 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208375    6560 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:25:12.208442    6560 kubeadm.go:309] 
	I0429 20:25:12.208643    6560 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 20:25:12.208733    6560 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:25:12.208874    6560 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.208936    6560 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:25:12.209014    6560 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209014    6560 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:25:12.209014    6560 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:25:12.209014    6560 kubeadm.go:309] 
	I0429 20:25:12.209735    6560 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209735    6560 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a \
	I0429 20:25:12.209931    6560 command_runner.go:130] > 	--control-plane 
	I0429 20:25:12.209931    6560 kubeadm.go:309] 	--control-plane 
	I0429 20:25:12.210277    6560 kubeadm.go:309] 
	I0429 20:25:12.210538    6560 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:25:12.210538    6560 kubeadm.go:309] 
	I0429 20:25:12.210726    6560 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210726    6560 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 37m7f9.ot94yshw4qor9i7b \
	I0429 20:25:12.210937    6560 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dbd1ba3e6c308c29d9b5e6e332a76a5e62dde8069e83c0d19acc2634735dfa1a 
	I0429 20:25:12.210937    6560 cni.go:84] Creating CNI manager for ""
	I0429 20:25:12.211197    6560 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 20:25:12.215717    6560 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 20:25:12.234164    6560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 20:25:12.242817    6560 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 20:25:12.242817    6560 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 20:25:12.242817    6560 command_runner.go:130] > Access: 2024-04-29 20:23:14.801002600 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] > Change: 2024-04-29 20:23:06.257000000 +0000
	I0429 20:25:12.242817    6560 command_runner.go:130] >  Birth: -
	I0429 20:25:12.242817    6560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 20:25:12.242817    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 20:25:12.301387    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 20:25:13.060621    6560 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > serviceaccount/kindnet created
	I0429 20:25:13.060707    6560 command_runner.go:130] > daemonset.apps/kindnet created
	I0429 20:25:13.060707    6560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-515700 minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=multinode-515700 minikube.k8s.io/primary=true
	I0429 20:25:13.078545    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.092072    6560 command_runner.go:130] > -16
	I0429 20:25:13.092113    6560 ops.go:34] apiserver oom_adj: -16
	I0429 20:25:13.290753    6560 command_runner.go:130] > node/multinode-515700 labeled
	I0429 20:25:13.292700    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 20:25:13.306335    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.426974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:13.819653    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:13.947766    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.320587    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.442246    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:14.822864    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:14.943107    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.309117    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.432718    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:15.814070    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:15.933861    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.317878    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.440680    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:16.819594    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:16.942387    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.322995    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.435199    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:17.809136    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:17.932465    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.308164    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.429047    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:18.808817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:18.928476    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.310090    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.432479    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:19.815590    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:19.929079    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.321723    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.442512    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:20.819466    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:20.933742    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.309314    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.424974    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:21.811819    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:21.952603    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.316794    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.432125    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:22.808890    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:22.925838    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.310021    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.434432    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:23.819369    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:23.948876    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.307817    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.457947    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:24.818980    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:24.932003    6560 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 20:25:25.308659    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:25:25.488149    6560 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 20:25:25.488217    6560 command_runner.go:130] > default   0         1s
	I0429 20:25:25.489686    6560 kubeadm.go:1107] duration metric: took 12.4288824s to wait for elevateKubeSystemPrivileges
	W0429 20:25:25.489686    6560 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:25:25.489686    6560 kubeadm.go:393] duration metric: took 27.7719601s to StartCluster
	I0429 20:25:25.490694    6560 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.490694    6560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:25.491677    6560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:25.493697    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 20:25:25.493697    6560 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 20:25:25.498680    6560 out.go:177] * Verifying Kubernetes components...
	I0429 20:25:25.493697    6560 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:25:25.494664    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:25.504657    6560 addons.go:69] Setting storage-provisioner=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:69] Setting default-storageclass=true in profile "multinode-515700"
	I0429 20:25:25.504657    6560 addons.go:234] Setting addon storage-provisioner=true in "multinode-515700"
	I0429 20:25:25.504657    6560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-515700"
	I0429 20:25:25.504657    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.506662    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:25.520673    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:25:25.944109    6560 command_runner.go:130] > apiVersion: v1
	I0429 20:25:25.944267    6560 command_runner.go:130] > data:
	I0429 20:25:25.944267    6560 command_runner.go:130] >   Corefile: |
	I0429 20:25:25.944367    6560 command_runner.go:130] >     .:53 {
	I0429 20:25:25.944367    6560 command_runner.go:130] >         errors
	I0429 20:25:25.944367    6560 command_runner.go:130] >         health {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            lameduck 5s
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         ready
	I0429 20:25:25.944367    6560 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            pods insecure
	I0429 20:25:25.944367    6560 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 20:25:25.944367    6560 command_runner.go:130] >            ttl 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         prometheus :9153
	I0429 20:25:25.944367    6560 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 20:25:25.944367    6560 command_runner.go:130] >            max_concurrent 1000
	I0429 20:25:25.944367    6560 command_runner.go:130] >         }
	I0429 20:25:25.944367    6560 command_runner.go:130] >         cache 30
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loop
	I0429 20:25:25.944367    6560 command_runner.go:130] >         reload
	I0429 20:25:25.944367    6560 command_runner.go:130] >         loadbalance
	I0429 20:25:25.944367    6560 command_runner.go:130] >     }
	I0429 20:25:25.944367    6560 command_runner.go:130] > kind: ConfigMap
	I0429 20:25:25.944367    6560 command_runner.go:130] > metadata:
	I0429 20:25:25.944367    6560 command_runner.go:130] >   creationTimestamp: "2024-04-29T20:25:11Z"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   name: coredns
	I0429 20:25:25.944367    6560 command_runner.go:130] >   namespace: kube-system
	I0429 20:25:25.944367    6560 command_runner.go:130] >   resourceVersion: "265"
	I0429 20:25:25.944367    6560 command_runner.go:130] >   uid: af2c186a-a14a-4671-8545-05c5ef5d4a89
	I0429 20:25:25.949389    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 20:25:26.023682    6560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:25:26.408680    6560 command_runner.go:130] > configmap/coredns replaced
	I0429 20:25:26.414254    6560 start.go:946] {"host.minikube.internal": 172.17.240.1} host record injected into CoreDNS's ConfigMap
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.415675    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:26.417677    6560 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 20:25:26.417677    6560 node_ready.go:35] waiting up to 6m0s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.418688    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.418688    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.418688    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.435291    6560 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.437034    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.438334    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.438430    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Audit-Id: a2ae57e5-53a3-4342-ad5c-c2149e87ef04
	I0429 20:25:26.438524    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438430    6560 round_trippers.go:580]     Audit-Id: 2e6b22a8-9874-417c-a6a5-f7b7437121f7
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438607    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.438692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.438796    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.438909    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.439086    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:26.440203    6560 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"391","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.440298    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.440406    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.440406    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.440519    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:26.440519    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.459913    6560 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 20:25:26.459962    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.459962    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Audit-Id: 9ca07d91-957f-4992-9642-97b01e07dde3
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.459962    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.459962    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"393","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.918339    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:26.918339    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918339    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918339    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.918300    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 20:25:26.918498    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:26.918580    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:26.918580    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.928264    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:26.928264    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:26.928264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Audit-Id: 70383541-35df-461a-b4fb-41bd3b56f11d
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.928809    6560 round_trippers.go:580]     Content-Length: 291
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:26 GMT
	I0429 20:25:26.928890    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.928948    6560 round_trippers.go:580]     Audit-Id: e628428d-1384-4709-a32e-084c9dfec614
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:26.929077    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:26.929077    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:26.929164    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5b3f6901-fc6a-4c22-a903-5c18e1daf72a","resourceVersion":"404","creationTimestamp":"2024-04-29T20:25:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 20:25:26.929400    6560 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-515700" context rescaled to 1 replicas
	I0429 20:25:26.929400    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.426913    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.426913    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.426913    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.426913    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.430795    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:27.430795    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Audit-Id: e4e6b2b1-e008-4f2a-bae4-3596fce97666
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.430887    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.430887    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.430996    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.431340    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.788213    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:27.789217    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.789348    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:27.792426    6560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:25:27.791141    6560 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 20:25:27.795103    6560 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:27.795205    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:25:27.795205    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.795205    6560 kapi.go:59] client config for multinode-515700: &rest.Config{Host:"https://172.17.241.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-515700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 20:25:27.795924    6560 addons.go:234] Setting addon default-storageclass=true in "multinode-515700"
	I0429 20:25:27.795924    6560 host.go:66] Checking if "multinode-515700" exists ...
	I0429 20:25:27.796802    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:27.922993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:27.923088    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:27.923175    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:27.923175    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:27.929435    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:27.929435    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:27.929545    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:27.929545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:27.929638    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:27 GMT
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Audit-Id: 8ef77f9f-d18f-4fa7-ab77-85c137602c84
	I0429 20:25:27.929638    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:27.930046    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.432611    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.432611    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.432611    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.432611    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.441320    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:28.441862    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Audit-Id: d32cd9f8-494c-4a69-b028-606c7d354657
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.441862    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.441951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.442308    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:28.442914    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:28.927674    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:28.927674    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:28.927674    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:28.927897    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:28.933213    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:28.933794    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:28.933794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:28 GMT
	I0429 20:25:28.933794    6560 round_trippers.go:580]     Audit-Id: 75d40b2c-c2ed-4221-9361-88591791a649
	I0429 20:25:28.934208    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.422724    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.422898    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.422898    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.422975    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.426431    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:29.426876    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Audit-Id: dde47b6c-069b-408d-a5c6-0a2ea7439643
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.426876    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.426876    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.427261    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:29.918308    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:29.918308    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:29.918308    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:29.918407    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:29.921072    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:29.921072    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:29.921072    6560 round_trippers.go:580]     Audit-Id: d4643df6-68ad-4c4c-9604-a5a4d019fba1
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:29.922076    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:29.922076    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:29 GMT
	I0429 20:25:29.922076    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.057057    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.057466    6560 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:30.057636    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:25:30.057750    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700 ).state
	I0429 20:25:30.145026    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:30.145306    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:30.424041    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.424310    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.424310    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.424310    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.428606    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.429051    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.429051    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.429263    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Audit-Id: 2c59a467-8079-41ed-ac1d-f96dd660d343
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.429290    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.429435    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.931993    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:30.931993    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:30.931993    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:30.931993    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:30.936635    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:30.936635    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:30.937644    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:30 GMT
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Audit-Id: 9214de5b-8221-4c68-b6b9-a92fe7d41fd1
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:30.937686    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:30.937686    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:30.938175    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:30.939066    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:31.423866    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.423866    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.423866    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.423988    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.427054    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.427827    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.427827    6560 round_trippers.go:580]     Audit-Id: 5f66acb8-ef38-4220-83b6-6e87fbec6f58
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.427869    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.427869    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.427869    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:31.932664    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:31.932664    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:31.932761    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:31.932761    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:31.936680    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:31.936680    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Audit-Id: f9fb721e-ccaf-4e33-ac69-8ed840761191
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:31.936680    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:31.936680    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:31 GMT
	I0429 20:25:31.937009    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.312723    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.313297    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700 ).networkadapters[0]).ipaddresses[0]
	I0429 20:25:32.424680    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.424953    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.424953    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.424953    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.428624    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:32.428906    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.428906    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Audit-Id: d3a39f3a-571d-46c0-a442-edf136da8a11
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.428972    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.428972    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.429531    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:32.857491    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:32.858444    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:32.926226    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:32.926317    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:32.926393    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:32.926393    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:32.929204    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:32.929583    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:32.929583    6560 round_trippers.go:580]     Audit-Id: 55fc987d-65c0-4ac8-95d2-7fa4185e179b
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:32.929673    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:32.929734    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:32.929734    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:32 GMT
	I0429 20:25:32.930327    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.034553    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:25:33.425759    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.425833    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.425833    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.425833    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.428624    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.429656    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Audit-Id: d581fce7-8906-48d7-8e13-2d1aba9dec04
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.429656    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.429656    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.429916    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:33.430438    6560 node_ready.go:53] node "multinode-515700" has status "Ready":"False"
	I0429 20:25:33.930984    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:33.931053    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:33.931053    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:33.931053    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:33.933717    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:33.933717    6560 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:33.933717    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:33 GMT
	I0429 20:25:33.933717    6560 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Audit-Id: 680ed792-db71-4b29-abb9-40f7154e8b1e
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:33.933717    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:33.933717    6560 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 20:25:33.933717    6560 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 20:25:33.933717    6560 command_runner.go:130] > pod/storage-provisioner created
	I0429 20:25:33.933717    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.428102    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.428102    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.428102    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.428102    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.431722    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:34.432624    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.432624    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.432624    6560 round_trippers.go:580]     Audit-Id: 86cc0608-3000-42b0-9ce8-4223e32d60c3
	I0429 20:25:34.432684    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.433082    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:34.932029    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:34.932316    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:34.932316    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:34.932316    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:34.936749    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:34.936749    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Audit-Id: 0e63a4db-3dd4-4e74-8b79-c019b6b97e89
	I0429 20:25:34.936749    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:34.937149    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:34.937149    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:34 GMT
	I0429 20:25:34.937415    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"372","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0429 20:25:35.024893    6560 main.go:141] libmachine: [stdout =====>] : 172.17.241.25
	
	I0429 20:25:35.025151    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:35.025317    6560 sshutil.go:53] new ssh client: &{IP:172.17.241.25 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700\id_rsa Username:docker}
	I0429 20:25:35.170634    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:25:35.371184    6560 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0429 20:25:35.371418    6560 round_trippers.go:463] GET https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 20:25:35.371571    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.371571    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.371571    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.380781    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:35.381213    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Audit-Id: 31f5e265-3d38-4520-88d0-33f47325189c
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.381213    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Content-Length: 1273
	I0429 20:25:35.381213    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.381380    6560 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 20:25:35.382106    6560 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.382183    6560 round_trippers.go:463] PUT https://172.17.241.25:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 20:25:35.382183    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.382269    6560 round_trippers.go:473]     Content-Type: application/json
	I0429 20:25:35.382269    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.390758    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.390758    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.390758    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Audit-Id: 4dbb716e-2d97-4c38-b342-f63e7d38a4d0
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.391020    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.391020    6560 round_trippers.go:580]     Content-Length: 1220
	I0429 20:25:35.391190    6560 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d5f1b4b0-4b0c-4d75-82ce-63633f3b20d9","resourceVersion":"425","creationTimestamp":"2024-04-29T20:25:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T20:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 20:25:35.395279    6560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 20:25:35.397530    6560 addons.go:505] duration metric: took 9.9037568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 20:25:35.421733    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.421733    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.421733    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.421733    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.452743    6560 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 20:25:35.452743    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.452743    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.452743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Audit-Id: 316d0393-7ba5-4629-87cb-7ae54d0ea965
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.453374    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.454477    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.455068    6560 node_ready.go:49] node "multinode-515700" has status "Ready":"True"
	I0429 20:25:35.455148    6560 node_ready.go:38] duration metric: took 9.0374019s for node "multinode-515700" to be "Ready" ...
	I0429 20:25:35.455148    6560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:35.455213    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:35.455213    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.455213    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.455213    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.473128    6560 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 20:25:35.473128    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Audit-Id: 81e159c0-b703-47ba-a9f3-82cc907b8705
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.473128    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.473128    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.475820    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"431","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0429 20:25:35.481714    6560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:35.482325    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.482379    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.482379    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.482432    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.491093    6560 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 20:25:35.491093    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Audit-Id: a2eb7ca2-d415-4a7c-a1f0-1ac743bd8f82
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.491835    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.491835    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.492090    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.493335    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.493335    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.493335    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.493419    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.496084    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:35.496084    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.496084    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:35 GMT
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Audit-Id: f61c97ad-ee7a-4666-9244-d7d2091b5d09
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.497097    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.497097    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.497131    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.497332    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:35.991323    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:35.991323    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.991323    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.991323    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.995451    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:35.995451    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Audit-Id: faa8a1a4-279f-4dc3-99c8-8c3b9e9ed746
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:35.995451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:35.995451    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:35.996592    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:35.997239    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:35.997292    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:35.997292    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:35.997292    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:35.999987    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:35.999987    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.000055    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.000055    6560 round_trippers.go:580]     Audit-Id: 070c7fff-f707-4b9a-9aef-031cedc68a8c
	I0429 20:25:36.000411    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.483004    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.483004    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.483004    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.483004    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.488152    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 20:25:36.488152    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.488152    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.488678    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.488678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.488743    6560 round_trippers.go:580]     Audit-Id: fb5cc675-b39d-4cb0-ba8c-24140b3d95e8
	I0429 20:25:36.489818    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.490926    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.490926    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.490985    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.490985    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.494654    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:36.494654    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.494654    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:36 GMT
	I0429 20:25:36.494654    6560 round_trippers.go:580]     Audit-Id: fe6d880a-4cf8-4b10-8c7f-debde123eafc
	I0429 20:25:36.495423    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:36.991643    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:36.991643    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.991643    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.991855    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:36.996384    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:36.996384    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Audit-Id: 933a6dd5-a0f7-4380-8189-3e378a8a620d
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:36.996384    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:36.996384    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:36.997332    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"435","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0429 20:25:36.997760    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:36.997760    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:36.997760    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:36.997760    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.000889    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.000889    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.001211    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.001211    6560 round_trippers.go:580]     Audit-Id: 0342e743-45a6-4fd7-97be-55a766946396
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.001274    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.001274    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.001759    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.495712    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-drcsj
	I0429 20:25:37.495712    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.495712    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.495712    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.508671    6560 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 20:25:37.508671    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.508671    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Audit-Id: d30c6154-a41b-4a0d-976f-d19f40e67223
	I0429 20:25:37.508671    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.508671    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0429 20:25:37.510663    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.510663    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.510663    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.510663    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.513686    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.513686    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Audit-Id: 397b83a5-95f9-4df8-a76b-042ecc96922c
	I0429 20:25:37.513686    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.514662    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.514662    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.514662    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.514662    6560 pod_ready.go:92] pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.514662    6560 pod_ready.go:81] duration metric: took 2.0329329s for pod "coredns-7db6d8ff4d-drcsj" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.514662    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-515700
	I0429 20:25:37.514662    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.514662    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.514662    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.517691    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.517691    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.518005    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Audit-Id: df53f071-06ed-4797-a51b-7d01b84cac86
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.518005    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.518412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-515700","namespace":"kube-system","uid":"85f2dc9a-17b5-413c-ab83-d3cbe955571e","resourceVersion":"319","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.241.25:2379","kubernetes.io/config.hash":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.mirror":"eaa086b1c8504ed49841dd571515d66e","kubernetes.io/config.seen":"2024-04-29T20:25:11.718525866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0429 20:25:37.519044    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.519044    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.519124    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.519124    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.521788    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.521788    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.521788    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.521788    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Audit-Id: ee5fdb3e-9869-4cd7-996a-a25b453aeb6b
	I0429 20:25:37.521944    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.521944    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.522769    6560 pod_ready.go:92] pod "etcd-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.522844    6560 pod_ready.go:81] duration metric: took 8.1819ms for pod "etcd-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.522844    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.523015    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-515700
	I0429 20:25:37.523015    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.523079    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.523079    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.525575    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.525575    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Audit-Id: cd9d851c-f606-48c9-8da3-3d194ab5464f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.525575    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.525575    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.526015    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-515700","namespace":"kube-system","uid":"f5a212eb-87a9-476a-981a-9f31731f39e6","resourceVersion":"312","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.241.25:8443","kubernetes.io/config.hash":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.mirror":"d8eb7a1b83ec3e88b473a807ea65d596","kubernetes.io/config.seen":"2024-04-29T20:25:11.718530866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0429 20:25:37.526356    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.526356    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.526356    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.526356    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.535954    6560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 20:25:37.535954    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.535954    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Audit-Id: 018aa21f-d408-4777-b54c-eb7aa2295a34
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.535954    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.536470    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.536974    6560 pod_ready.go:92] pod "kube-apiserver-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.537034    6560 pod_ready.go:81] duration metric: took 14.0881ms for pod "kube-apiserver-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537034    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.537183    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-515700
	I0429 20:25:37.537276    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.537297    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.537297    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.539964    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 20:25:37.539964    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Audit-Id: d3232756-fc07-4b33-a3b5-989d2932cec4
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.540692    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.540692    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.541274    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-515700","namespace":"kube-system","uid":"2c9ba563-c2af-45b7-bc1e-bf39759a339b","resourceVersion":"315","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.mirror":"4c48107558ee4dbc6e96f0df56010a58","kubernetes.io/config.seen":"2024-04-29T20:25:11.718532066Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0429 20:25:37.541935    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.541935    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.541935    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.541935    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.555960    6560 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 20:25:37.555960    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Audit-Id: 2d117219-3b1a-47fe-99a4-7e5aea7e84d3
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.555960    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.555960    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.555960    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.555960    6560 pod_ready.go:92] pod "kube-controller-manager-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.555960    6560 pod_ready.go:81] duration metric: took 18.9251ms for pod "kube-controller-manager-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.555960    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.556943    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gx5x
	I0429 20:25:37.556943    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.556943    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.556943    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.559965    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.560477    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.560477    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.560477    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Audit-Id: 14e6d1be-eac6-4f20-9621-b409c951fae1
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.560566    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.560781    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6gx5x","generateName":"kube-proxy-","namespace":"kube-system","uid":"886ac698-7e9b-431b-b822-577331b02f41","resourceVersion":"407","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"027f1d05-009f-4199-81e9-45b0a2d3710f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"027f1d05-009f-4199-81e9-45b0a2d3710f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0429 20:25:37.561552    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.561581    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.561581    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.561581    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.567713    6560 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 20:25:37.567713    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.567713    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.567713    6560 round_trippers.go:580]     Audit-Id: 678df177-6944-4d30-b889-62528c06bab2
	I0429 20:25:37.567713    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.568391    6560 pod_ready.go:92] pod "kube-proxy-6gx5x" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.568391    6560 pod_ready.go:81] duration metric: took 12.4313ms for pod "kube-proxy-6gx5x" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.568391    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.701559    6560 request.go:629] Waited for 132.9214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701779    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-515700
	I0429 20:25:37.701853    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.701853    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.701853    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.705314    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.706129    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.706129    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.706129    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.706183    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.706183    6560 round_trippers.go:580]     Audit-Id: 4fb010ad-4d68-4aa0-9ba4-f68d04faa9e8
	I0429 20:25:37.706412    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-515700","namespace":"kube-system","uid":"096d3e94-25ba-49b3-b329-6fb47fc88f25","resourceVersion":"334","creationTimestamp":"2024-04-29T20:25:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.mirror":"53b8f763ca4aeac1117873e3808cadb4","kubernetes.io/config.seen":"2024-04-29T20:25:11.718533166Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0429 20:25:37.905204    6560 request.go:629] Waited for 197.8802ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes/multinode-515700
	I0429 20:25:37.905322    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.905322    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.905466    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.909057    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 20:25:37.909159    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Audit-Id: a6cecf7e-83ad-4d5f-8cbb-a65ced7e83ce
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.909159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.909159    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.909286    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T20:25:08Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0429 20:25:37.909697    6560 pod_ready.go:92] pod "kube-scheduler-multinode-515700" in "kube-system" namespace has status "Ready":"True"
	I0429 20:25:37.909697    6560 pod_ready.go:81] duration metric: took 341.3037ms for pod "kube-scheduler-multinode-515700" in "kube-system" namespace to be "Ready" ...
	I0429 20:25:37.909697    6560 pod_ready.go:38] duration metric: took 2.4545299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:25:37.909697    6560 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:25:37.923721    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:25:37.956142    6560 command_runner.go:130] > 2047
	I0429 20:25:37.956226    6560 api_server.go:72] duration metric: took 12.462433s to wait for apiserver process to appear ...
	I0429 20:25:37.956226    6560 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:25:37.956330    6560 api_server.go:253] Checking apiserver healthz at https://172.17.241.25:8443/healthz ...
	I0429 20:25:37.965150    6560 api_server.go:279] https://172.17.241.25:8443/healthz returned 200:
	ok
	I0429 20:25:37.965332    6560 round_trippers.go:463] GET https://172.17.241.25:8443/version
	I0429 20:25:37.965364    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:37.965364    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:37.965364    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:37.967124    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 20:25:37.967124    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:37 GMT
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Audit-Id: c3b17e5f-8eb5-4422-bcd1-48cea5831311
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:37.967124    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:37.967124    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:37.967423    6560 round_trippers.go:580]     Content-Length: 263
	I0429 20:25:37.967423    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 20:25:37.967530    6560 api_server.go:141] control plane version: v1.30.0
	I0429 20:25:37.967530    6560 api_server.go:131] duration metric: took 11.2306ms to wait for apiserver health ...
	I0429 20:25:37.967629    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:25:38.109818    6560 request.go:629] Waited for 142.1878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110201    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.110256    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.110275    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.110275    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.118070    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.118070    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.118070    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.118070    6560 round_trippers.go:580]     Audit-Id: 557b3073-d14e-4919-8133-995d5b042d22
	I0429 20:25:38.119823    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.123197    6560 system_pods.go:59] 8 kube-system pods found
	I0429 20:25:38.123197    6560 system_pods.go:61] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.123197    6560 system_pods.go:61] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.123197    6560 system_pods.go:74] duration metric: took 155.566ms to wait for pod list to return data ...
	I0429 20:25:38.123197    6560 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:25:38.295950    6560 request.go:629] Waited for 172.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/default/serviceaccounts
	I0429 20:25:38.296157    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.296300    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.296300    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.300424    6560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 20:25:38.300424    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Content-Length: 261
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.300613    6560 round_trippers.go:580]     Audit-Id: 7466bf5b-fa07-4a6b-bc06-274738fc9259
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.300674    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.300674    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.300674    6560 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"13c4332f-9236-4f04-9e46-f5a98bc3d731","resourceVersion":"343","creationTimestamp":"2024-04-29T20:25:24Z"}}]}
	I0429 20:25:38.300674    6560 default_sa.go:45] found service account: "default"
	I0429 20:25:38.300674    6560 default_sa.go:55] duration metric: took 177.4758ms for default service account to be created ...
	I0429 20:25:38.300674    6560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:25:38.498686    6560 request.go:629] Waited for 197.291ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.498782    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/namespaces/kube-system/pods
	I0429 20:25:38.499005    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.499005    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.499005    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.506756    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.507387    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Audit-Id: ffc5efdb-4263-4450-8ff2-c1bb3f979300
	I0429 20:25:38.507387    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.507485    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.507503    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.507503    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.508809    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-drcsj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"35a34648-701f-40b2-b391-6f400ce8ed45","resourceVersion":"446","creationTimestamp":"2024-04-29T20:25:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"e1b3671e-dd8a-4deb-ae27-ec03158ec879","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T20:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b3671e-dd8a-4deb-ae27-ec03158ec879\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0429 20:25:38.512231    6560 system_pods.go:86] 8 kube-system pods found
	I0429 20:25:38.512305    6560 system_pods.go:89] "coredns-7db6d8ff4d-drcsj" [35a34648-701f-40b2-b391-6f400ce8ed45] Running
	I0429 20:25:38.512305    6560 system_pods.go:89] "etcd-multinode-515700" [85f2dc9a-17b5-413c-ab83-d3cbe955571e] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kindnet-lt84t" [a7fc5a24-eb92-47ad-af92-603fc4fd5910] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-apiserver-multinode-515700" [f5a212eb-87a9-476a-981a-9f31731f39e6] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-controller-manager-multinode-515700" [2c9ba563-c2af-45b7-bc1e-bf39759a339b] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-proxy-6gx5x" [886ac698-7e9b-431b-b822-577331b02f41] Running
	I0429 20:25:38.512378    6560 system_pods.go:89] "kube-scheduler-multinode-515700" [096d3e94-25ba-49b3-b329-6fb47fc88f25] Running
	I0429 20:25:38.512451    6560 system_pods.go:89] "storage-provisioner" [ac7fbd67-6f97-4995-a9f9-64324ff5adad] Running
	I0429 20:25:38.512451    6560 system_pods.go:126] duration metric: took 211.7756ms to wait for k8s-apps to be running ...
	I0429 20:25:38.512451    6560 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:25:38.526027    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:25:38.555837    6560 system_svc.go:56] duration metric: took 43.3852ms WaitForService to wait for kubelet
	I0429 20:25:38.555837    6560 kubeadm.go:576] duration metric: took 13.0620394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:25:38.556007    6560 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:25:38.701455    6560 request.go:629] Waited for 145.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701896    6560 round_trippers.go:463] GET https://172.17.241.25:8443/api/v1/nodes
	I0429 20:25:38.701917    6560 round_trippers.go:469] Request Headers:
	I0429 20:25:38.701917    6560 round_trippers.go:473]     Accept: application/json, */*
	I0429 20:25:38.702032    6560 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 20:25:38.709221    6560 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 20:25:38.709221    6560 round_trippers.go:577] Response Headers:
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Audit-Id: 9241b2a0-c483-4bfb-8a19-8f5c9b610b53
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Content-Type: application/json
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1857eef5-0f4e-4a76-9ef7-d2446f63099f
	I0429 20:25:38.709221    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2ca6f27e-928a-42b7-aaca-fb915f4ee7b9
	I0429 20:25:38.709221    6560 round_trippers.go:580]     Date: Mon, 29 Apr 2024 20:25:38 GMT
	I0429 20:25:38.709221    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-515700","uid":"2a96bf4d-5635-43b7-a5a1-59a9e9a695e8","resourceVersion":"430","creationTimestamp":"2024-04-29T20:25:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-515700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2cfd4287855d1061f3afd2cc80f438e391f2ea1e","minikube.k8s.io/name":"multinode-515700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T20_25_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0429 20:25:38.710061    6560 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:25:38.710061    6560 node_conditions.go:123] node cpu capacity is 2
	I0429 20:25:38.710061    6560 node_conditions.go:105] duration metric: took 154.0529ms to run NodePressure ...
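	The NodePressure check above pulls each node's ephemeral-storage and CPU capacity out of the `/api/v1/nodes` response body. A minimal sketch of that extraction over a decoded JSON payload (the struct shapes here are illustrative stand-ins, not the real client-go types minikube uses):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeList mirrors only the fields the log lines read from the
// NodeList response; it is an illustrative sketch, not client-go.
type nodeList struct {
	Items []struct {
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

// capacities returns the first node's ephemeral-storage and cpu
// capacity strings, as reported in node_conditions.go above.
func capacities(body []byte) (storage, cpu string, err error) {
	var nl nodeList
	if err = json.Unmarshal(body, &nl); err != nil || len(nl.Items) == 0 {
		return "", "", fmt.Errorf("no nodes in response: %v", err)
	}
	c := nl.Items[0].Status.Capacity
	return c["ephemeral-storage"], c["cpu"], nil
}

func main() {
	// Values taken from the log: 17734596Ki storage, 2 CPUs.
	body := []byte(`{"items":[{"status":{"capacity":{"ephemeral-storage":"17734596Ki","cpu":"2"}}}]}`)
	s, c, _ := capacities(body)
	fmt.Println(s, c)
}
```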
	I0429 20:25:38.710061    6560 start.go:240] waiting for startup goroutines ...
	I0429 20:25:38.710061    6560 start.go:245] waiting for cluster config update ...
	I0429 20:25:38.710061    6560 start.go:254] writing updated cluster config ...
	I0429 20:25:38.717493    6560 out.go:177] 
	I0429 20:25:38.721129    6560 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:25:38.729134    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.735840    6560 out.go:177] * Starting "multinode-515700-m02" worker node in "multinode-515700" cluster
	I0429 20:25:38.738518    6560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 20:25:38.738518    6560 cache.go:56] Caching tarball of preloaded images
	I0429 20:25:38.738983    6560 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 20:25:38.739240    6560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 20:25:38.739481    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:25:38.745029    6560 start.go:360] acquireMachinesLock for multinode-515700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:25:38.745029    6560 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-515700-m02"
	I0429 20:25:38.745029    6560 start.go:93] Provisioning new machine with config: &{Name:multinode-515700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-515700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.241.25 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 20:25:38.745575    6560 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 20:25:38.748852    6560 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:25:38.748852    6560 start.go:159] libmachine.API.Create for "multinode-515700" (driver="hyperv")
	I0429 20:25:38.748852    6560 client.go:168] LocalClient.Create starting
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:38.749822    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 20:25:40.745357    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:40.746212    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 20:25:42.605453    6560 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:42.606031    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:44.191146    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:47.992432    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:47.992702    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:47.996014    6560 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:25:48.551162    6560 main.go:141] libmachine: Creating SSH key...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: Creating VM...
	I0429 20:25:48.768786    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 20:25:51.873374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:51.874174    6560 main.go:141] libmachine: Using switch "Default Switch"
	I0429 20:25:51.874221    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 20:25:53.736899    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:53.736899    6560 main.go:141] libmachine: Creating VHD
	I0429 20:25:53.737514    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D65FFD0C-285E-44D0-8723-21544BDDE71A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 20:25:57.515848    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing magic tar header
	I0429 20:25:57.515848    6560 main.go:141] libmachine: Writing SSH key tar header
	I0429 20:25:57.529054    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:00.733433    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:00.734035    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd' -SizeBytes 20000MB
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:03.313569    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:03.314283    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-515700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:07.189061    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-515700-m02 -DynamicMemoryEnabled $false
	I0429 20:26:09.480100    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:09.480543    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-515700-m02 -Count 2
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:11.716608    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:11.716979    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\boot2docker.iso'
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:14.375944    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:14.377298    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-515700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\disk.vhd'
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:17.090839    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:17.090909    6560 main.go:141] libmachine: Starting VM...
	I0429 20:26:17.090909    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-515700-m02
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [stderr =====>] : 
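	Taken together, the VM-creation transcript above reduces to a fixed sequence of Hyper-V cmdlets (fixed VHD for the SSH-key tar header, convert to dynamic, resize, create and configure the VM, start it). A sketch of how such a command list can be assembled, with the cmdlet names and flags taken from the log itself (the function and its parameters are illustrative, not minikube's actual driver code):

```go
package main

import "fmt"

// buildCreateCmds assembles the Hyper-V PowerShell cmdlet sequence
// shown in the provisioning log. Cmdlet names and flags follow the
// transcript; the helper itself is an illustrative sketch.
func buildCreateCmds(name, dir, swtch string, memMB, cpus, diskMB int) []string {
	return []string{
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes %dMB`, dir, diskMB),
		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName '%s' -MemoryStartupBytes %dMB`, name, dir, swtch, memMB),
		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count %d`, name, cpus),
		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
	}
}

func main() {
	for _, c := range buildCreateCmds("multinode-515700-m02",
		`C:\mk\machines\multinode-515700-m02`, "Default Switch", 2200, 2, 20000) {
		fmt.Println(c)
	}
}
```

	Each string would be handed to `powershell.exe -NoProfile -NonInteractive`, exactly as the `[executing ==>]` lines record.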
	I0429 20:26:20.223074    6560 main.go:141] libmachine: Waiting for host to start...
	I0429 20:26:20.223074    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:22.526884    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:22.527096    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:25.111047    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:26.113296    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:28.339189    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:28.339433    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:30.953587    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:30.953628    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:31.955478    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:34.197688    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:34.197831    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:34.197901    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:36.805175    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:37.817016    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:40.071715    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:42.682666    6560 main.go:141] libmachine: [stdout =====>] : 
	I0429 20:26:42.683603    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:43.685897    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:45.906226    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:48.604877    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:48.604915    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:48.604999    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:50.794876    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:50.795093    6560 main.go:141] libmachine: [stderr =====>] : 
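	The "Waiting for host to start..." exchange above repeatedly polls `( Get-VM ).state` and the adapter's first IP address, sleeping about a second between attempts, until Hyper-V finally reports `172.17.253.145`. A minimal Go sketch of that retry pattern (the callback signatures are illustrative, not minikube's real driver API):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls the VM state and IP callbacks until the running VM
// reports an address, mirroring the Get-VM / ipaddresses[0] loop in
// the log. Callbacks stand in for the PowerShell invocations.
func waitForIP(getState, getIP func() string,
	interval time.Duration, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if getState() == "Running" {
			if ip := getIP(); ip != "" {
				return ip, nil
			}
		}
		time.Sleep(interval)
	}
	return "", errors.New("timed out waiting for VM IP")
}

func main() {
	calls := 0
	state := func() string { return "Running" }
	ip := func() string {
		calls++
		if calls < 3 {
			return "" // adapter has no address yet, as in the log
		}
		return "172.17.253.145"
	}
	addr, err := waitForIP(state, ip, time.Millisecond, 10)
	fmt.Println(addr, err)
}
```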
	I0429 20:26:50.795407    6560 machine.go:94] provisionDockerMachine start ...
	I0429 20:26:50.795407    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:52.992195    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:52.992243    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:52.992331    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:26:55.622301    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:55.630552    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:55.641728    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:26:55.642758    6560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:26:55.769182    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:26:55.769182    6560 buildroot.go:166] provisioning hostname "multinode-515700-m02"
	I0429 20:26:55.769333    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:26:57.942857    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:26:57.943721    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:26:57.943789    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:00.610012    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:00.610498    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:00.617342    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:00.618022    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:00.618022    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-515700-m02 && echo "multinode-515700-m02" | sudo tee /etc/hostname
	I0429 20:27:00.774430    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-515700-m02
	
	I0429 20:27:00.775391    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:02.970796    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:02.971352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:02.971577    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:05.633190    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:05.640782    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:05.640782    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:05.640782    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-515700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-515700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-515700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:27:05.779330    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:27:05.779330    6560 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 20:27:05.779435    6560 buildroot.go:174] setting up certificates
	I0429 20:27:05.779435    6560 provision.go:84] configureAuth start
	I0429 20:27:05.779531    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:07.939052    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:07.939785    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:10.607752    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:10.608236    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:10.608319    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:12.804913    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:15.428095    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:15.428095    6560 provision.go:143] copyHostCerts
	I0429 20:27:15.429066    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 20:27:15.429066    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 20:27:15.429066    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 20:27:15.429626    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 20:27:15.430936    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 20:27:15.431366    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 20:27:15.431366    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 20:27:15.431875    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 20:27:15.432822    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 20:27:15.433064    6560 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 20:27:15.433064    6560 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 20:27:15.433498    6560 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 20:27:15.434807    6560 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-515700-m02 san=[127.0.0.1 172.17.253.145 localhost minikube multinode-515700-m02]
	I0429 20:27:15.511954    6560 provision.go:177] copyRemoteCerts
	I0429 20:27:15.527105    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:27:15.527105    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:17.688855    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:20.368198    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:20.368587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:20.368930    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:20.467819    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9406764s)
	I0429 20:27:20.468832    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 20:27:20.469887    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 20:27:20.524889    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 20:27:20.525559    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 20:27:20.578020    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 20:27:20.578217    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:27:20.634803    6560 provision.go:87] duration metric: took 14.8552541s to configureAuth
	I0429 20:27:20.634874    6560 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:27:20.635533    6560 config.go:182] Loaded profile config "multinode-515700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 20:27:20.635638    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:22.779478    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:22.779762    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:25.421346    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:25.427345    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:25.427345    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:25.428345    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 20:27:25.562050    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 20:27:25.562195    6560 buildroot.go:70] root file system type: tmpfs
	I0429 20:27:25.562515    6560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 20:27:25.562592    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:27.769370    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:30.404141    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:30.405195    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:30.412105    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:30.413171    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:30.413700    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.241.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 20:27:30.578477    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.241.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 20:27:30.578477    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:32.772358    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:32.772580    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:35.458587    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:35.465933    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:35.466426    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:35.466509    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 20:27:37.701893    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 20:27:37.701981    6560 machine.go:97] duration metric: took 46.9062133s to provisionDockerMachine
	I0429 20:27:37.702052    6560 client.go:171] duration metric: took 1m58.9522849s to LocalClient.Create
	I0429 20:27:37.702194    6560 start.go:167] duration metric: took 1m58.9524269s to libmachine.API.Create "multinode-515700"
	I0429 20:27:37.702194    6560 start.go:293] postStartSetup for "multinode-515700-m02" (driver="hyperv")
	I0429 20:27:37.702194    6560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:27:37.716028    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:27:37.716028    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:39.888498    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:39.889355    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:39.889707    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:42.575511    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:42.576527    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:27:42.688245    6560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9721792s)
	I0429 20:27:42.703472    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:27:42.710185    6560 command_runner.go:130] > NAME=Buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 20:27:42.710391    6560 command_runner.go:130] > ID=buildroot
	I0429 20:27:42.710391    6560 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 20:27:42.710391    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 20:27:42.710562    6560 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:27:42.710562    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 20:27:42.710640    6560 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 20:27:42.712121    6560 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 20:27:42.712121    6560 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> /etc/ssl/certs/137562.pem
	I0429 20:27:42.725734    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:27:42.745571    6560 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 20:27:42.798223    6560 start.go:296] duration metric: took 5.0959902s for postStartSetup
	I0429 20:27:42.801718    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:44.984374    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:44.985225    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:47.629223    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:47.630520    6560 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-515700\config.json ...
	I0429 20:27:47.633045    6560 start.go:128] duration metric: took 2m8.8864784s to createHost
	I0429 20:27:47.633167    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:49.823309    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:49.823412    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:49.823495    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:52.524084    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:52.524183    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:52.530451    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:52.531204    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:52.531204    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:27:52.658970    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422472.660345683
	
	I0429 20:27:52.659208    6560 fix.go:216] guest clock: 1714422472.660345683
	I0429 20:27:52.659208    6560 fix.go:229] Guest: 2024-04-29 20:27:52.660345683 +0000 UTC Remote: 2024-04-29 20:27:47.6330452 +0000 UTC m=+346.394263801 (delta=5.027300483s)
	I0429 20:27:52.659208    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:54.832352    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:27:57.461861    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:27:57.461927    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:57.467747    6560 main.go:141] libmachine: Using SSH client type: native
	I0429 20:27:57.468699    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.145 22 <nil> <nil>}
	I0429 20:27:57.468699    6560 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714422472
	I0429 20:27:57.617018    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 20:27:52 UTC 2024
	
	I0429 20:27:57.617018    6560 fix.go:236] clock set: Mon Apr 29 20:27:52 UTC 2024
	 (err=<nil>)
	I0429 20:27:57.617018    6560 start.go:83] releasing machines lock for "multinode-515700-m02", held for 2m18.8709228s
	I0429 20:27:57.618122    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:27:59.795247    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:27:59.795912    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:02.475615    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:02.475867    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:02.479078    6560 out.go:177] * Found network options:
	I0429 20:28:02.481434    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.483990    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.486147    6560 out.go:177]   - NO_PROXY=172.17.241.25
	W0429 20:28:02.488513    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 20:28:02.490094    6560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 20:28:02.492090    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:28:02.492090    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:02.504078    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 20:28:02.504078    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-515700-m02 ).state
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:04.720534    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-515700-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 20:28:07.440744    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.440938    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.441026    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stdout =====>] : 172.17.253.145
	
	I0429 20:28:07.466623    6560 main.go:141] libmachine: [stderr =====>] : 
	I0429 20:28:07.467629    6560 sshutil.go:53] new ssh client: &{IP:172.17.253.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-515700-m02\id_rsa Username:docker}
	I0429 20:28:07.629032    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 20:28:07.630105    6560 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1379759s)
	I0429 20:28:07.630105    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 20:28:07.630229    6560 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1259881s)
	W0429 20:28:07.630229    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:28:07.649597    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:28:07.685721    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 20:28:07.685954    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:28:07.685954    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:07.686227    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:07.722613    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 20:28:07.736060    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 20:28:07.771561    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 20:28:07.793500    6560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 20:28:07.809715    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 20:28:07.846242    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.882404    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 20:28:07.918280    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 20:28:07.956186    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:28:07.994072    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 20:28:08.029701    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 20:28:08.067417    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 20:28:08.104772    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:28:08.126209    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 20:28:08.140685    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:28:08.181598    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:08.410362    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 20:28:08.449856    6560 start.go:494] detecting cgroup driver to use...
	I0429 20:28:08.466974    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Unit]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 20:28:08.492900    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 20:28:08.492900    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 20:28:08.492900    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitBurst=3
	I0429 20:28:08.492900    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 20:28:08.492900    6560 command_runner.go:130] > [Service]
	I0429 20:28:08.492900    6560 command_runner.go:130] > Type=notify
	I0429 20:28:08.492900    6560 command_runner.go:130] > Restart=on-failure
	I0429 20:28:08.492900    6560 command_runner.go:130] > Environment=NO_PROXY=172.17.241.25
	I0429 20:28:08.492900    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 20:28:08.492900    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 20:28:08.492900    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 20:28:08.492900    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 20:28:08.492900    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 20:28:08.492900    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 20:28:08.492900    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 20:28:08.492900    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 20:28:08.492900    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 20:28:08.492900    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 20:28:08.492900    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNOFILE=infinity
	I0429 20:28:08.492900    6560 command_runner.go:130] > LimitNPROC=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > LimitCORE=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 20:28:08.493891    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 20:28:08.493891    6560 command_runner.go:130] > TasksMax=infinity
	I0429 20:28:08.493891    6560 command_runner.go:130] > TimeoutStartSec=0
	I0429 20:28:08.493891    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 20:28:08.493891    6560 command_runner.go:130] > Delegate=yes
	I0429 20:28:08.493891    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 20:28:08.493891    6560 command_runner.go:130] > KillMode=process
	I0429 20:28:08.493891    6560 command_runner.go:130] > [Install]
	I0429 20:28:08.493891    6560 command_runner.go:130] > WantedBy=multi-user.target
	I0429 20:28:08.505928    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.548562    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:28:08.606977    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:28:08.652185    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.695349    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 20:28:08.785230    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 20:28:08.816602    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:28:08.853434    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 20:28:08.870019    6560 ssh_runner.go:195] Run: which cri-dockerd
	I0429 20:28:08.876256    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 20:28:08.890247    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 20:28:08.911471    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 20:28:08.962890    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 20:28:09.201152    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 20:28:09.397561    6560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 20:28:09.398166    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 20:28:09.451159    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:28:09.673084    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 20:29:10.809648    6560 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 20:29:10.809648    6560 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 20:29:10.809648    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1361028s)
	I0429 20:29:10.827248    6560 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.851677    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0429 20:29:10.852081    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 20:29:10.852173    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 20:29:10.852279    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 20:29:10.852319    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852344    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852432    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 20:29:10.852556    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 20:29:10.852660    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 20:29:10.852739    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	I0429 20:29:10.852786    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.852822    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.853789    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854027    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854100    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854118    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854142    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854175    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854216    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854247    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 20:29:10.854906    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	I0429 20:29:10.855026    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 20:29:10.855112    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 20:29:10.855146    6560 command_runner.go:130] > Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 20:29:10.865335    6560 out.go:177] 
	W0429 20:29:10.865335    6560 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:27:36 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.132717145Z" level=info msg="Starting up"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.134292152Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:36.136131460Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.173179730Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203487769Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203619069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203721770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203742470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.203906971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204086671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204373573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204505473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204547374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204577174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.204698774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.205204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208604792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208740593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.208954494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209168695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209290195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209455996Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.209557697Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238322428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238505829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238534329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238554329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238573229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.238716730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239310733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239527934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239663534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239688134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239706535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239723235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239738935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239755635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239772735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239789835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239842835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239879335Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239921136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239948236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.239990236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240009136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240024336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240039036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240052536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240067536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240166437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240181137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240195337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240209237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240226737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240251037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240266537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240280437Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240333737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240393838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240410938Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240423438Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240634439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240721639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.240741039Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241167741Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241343042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241406042Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:27:36 multinode-515700-m02 dockerd[679]: time="2024-04-29T20:27:36.241452543Z" level=info msg="containerd successfully booted in 0.070754s"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.213396150Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.254770228Z" level=info msg="Loading containers: start."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.547301295Z" level=info msg="Loading containers: done."
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571093782Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.571248184Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.700323684Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:27:37 multinode-515700-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:27:37 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:27:37.702313817Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:28:09 multinode-515700-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.704252788Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.706618717Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707209424Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707266525Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:28:09 multinode-515700-m02 dockerd[673]: time="2024-04-29T20:28:09.707296225Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:28:10 multinode-515700-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:28:10 multinode-515700-m02 dockerd[1020]: time="2024-04-29T20:28:10.786889353Z" level=info msg="Starting up"
	Apr 29 20:29:10 multinode-515700-m02 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:29:10 multinode-515700-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 20:29:10.865335    6560 out.go:239] * 
	W0429 20:29:10.869400    6560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:29:10.876700    6560 out.go:177] 
	
	
	==> Docker <==
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:42:56 multinode-515700 dockerd[1325]: 2024/04/29 20:42:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:26 multinode-515700 dockerd[1325]: 2024/04/29 20:47:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:47:27 multinode-515700 dockerd[1325]: 2024/04/29 20:47:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:48:49 multinode-515700 dockerd[1325]: 2024/04/29 20:48:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:50:52 multinode-515700 dockerd[1325]: 2024/04/29 20:50:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:50:52 multinode-515700 dockerd[1325]: 2024/04/29 20:50:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:50:52 multinode-515700 dockerd[1325]: 2024/04/29 20:50:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:50:52 multinode-515700 dockerd[1325]: 2024/04/29 20:50:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:50:52 multinode-515700 dockerd[1325]: 2024/04/29 20:50:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:50:52 multinode-515700 dockerd[1325]: 2024/04/29 20:50:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 20:50:53 multinode-515700 dockerd[1325]: 2024/04/29 20:50:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	32c6f043cec2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Running             busybox                   0                   e1a58f6d29ec9       busybox-fc5497c4f-dv5v8
	15da1b832ef20       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   73ab97e30d3d0       coredns-7db6d8ff4d-drcsj
	b26e455e6f823       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   0274116a036cf       storage-provisioner
	11141cf0a01e5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Running             kindnet-cni               0                   5c226cf922db1       kindnet-lt84t
	8d116812e2fa7       a0bf559e280cf                                                                                         27 minutes ago      Running             kube-proxy                0                   c4e88976a7bb5       kube-proxy-6gx5x
	9b9ad8fbed853       c42f13656d0b2                                                                                         28 minutes ago      Running             kube-apiserver            0                   e1040c321d522       kube-apiserver-multinode-515700
	7748681b165fb       259c8277fcbbc                                                                                         28 minutes ago      Running             kube-scheduler            0                   ab47450efbe05       kube-scheduler-multinode-515700
	01f30fac305bc       3861cfcd7c04c                                                                                         28 minutes ago      Running             etcd                      0                   b5202cca492c4       etcd-multinode-515700
	c5de44f1f1066       c7aad43836fa5                                                                                         28 minutes ago      Running             kube-controller-manager   0                   4ae9818227910       kube-controller-manager-multinode-515700
	
	
	==> coredns [15da1b832ef2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 658b75f59357881579d818bea4574a099ffd8bf4e34cb2d6414c381890635887b0895574e607ab48d69c0bc2657640404a00a48de79c5b96ce27f6a68e70a912
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36587 - 14172 "HINFO IN 4725538422205950284.7962128480288568612. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062354244s
	[INFO] 10.244.0.3:46156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244102s
	[INFO] 10.244.0.3:48057 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.210765088s
	[INFO] 10.244.0.3:47676 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15403778s
	[INFO] 10.244.0.3:57534 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.237328274s
	[INFO] 10.244.0.3:38726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000345103s
	[INFO] 10.244.0.3:54844 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.04703092s
	[INFO] 10.244.0.3:51897 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000879808s
	[INFO] 10.244.0.3:57925 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122101s
	[INFO] 10.244.0.3:39997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012692914s
	[INFO] 10.244.0.3:37301 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000333403s
	[INFO] 10.244.0.3:60294 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172702s
	[INFO] 10.244.0.3:33135 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000250902s
	[INFO] 10.244.0.3:46585 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141701s
	[INFO] 10.244.0.3:41280 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127902s
	[INFO] 10.244.0.3:46602 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000220001s
	[INFO] 10.244.0.3:47802 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077001s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	[INFO] 10.244.0.3:45741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166201s
	[INFO] 10.244.0.3:48683 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	[INFO] 10.244.0.3:47252 - 5 "PTR IN 1.240.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159702s
	
	
	==> describe nodes <==
	Name:               multinode-515700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_25_13_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:53:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:50:41 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:50:41 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:50:41 +0000   Mon, 29 Apr 2024 20:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:50:41 +0000   Mon, 29 Apr 2024 20:25:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.241.25
	  Hostname:    multinode-515700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc8de88647d944658545c7ae4a702aea
	  System UUID:                68adc21b-67d2-5446-9537-0dea9fd880a0
	  Boot ID:                    9507eca5-5f1f-4862-974e-a61fb27048d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dv5v8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-drcsj                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-515700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-lt84t                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-515700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-multinode-515700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-6gx5x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-515700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node multinode-515700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node multinode-515700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node multinode-515700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m                node-controller  Node multinode-515700 event: Registered Node multinode-515700 in Controller
	  Normal  NodeReady                27m                kubelet          Node multinode-515700 status is now: NodeReady
	
	
	Name:               multinode-515700-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-515700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-515700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T20_46_05_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:46:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-515700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:49:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:49:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:49:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:49:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 20:46:35 +0000   Mon, 29 Apr 2024 20:49:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.240.210
	  Hostname:    multinode-515700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cba11e160ba341e08600b430623543e3
	  System UUID:                c93866d4-f3c2-8b4a-808f-8a49ef3473c2
	  Boot ID:                    eca6382a-2500-4a1e-9ddd-477f0ebe0910
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2t4c2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-svhl6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m14s
	  kube-system                 kube-proxy-ds5fx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  7m14s (x2 over 7m15s)  kubelet          Node multinode-515700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m14s (x2 over 7m15s)  kubelet          Node multinode-515700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m14s (x2 over 7m15s)  kubelet          Node multinode-515700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m10s                  node-controller  Node multinode-515700-m03 event: Registered Node multinode-515700-m03 in Controller
	  Normal  NodeReady                6m51s                  kubelet          Node multinode-515700-m03 status is now: NodeReady
	  Normal  NodeNotReady             3m30s                  node-controller  Node multinode-515700-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 20:24] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.212417] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +31.830340] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.112166] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.613568] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.259380] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.863180] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.213718] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.233297] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.301716] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.953055] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.129851] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.793087] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[Apr29 20:25] systemd-fstab-generator[1710]: Ignoring "noauto" option for root device
	[  +0.110579] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.112113] systemd-fstab-generator[2108]: Ignoring "noauto" option for root device
	[  +0.165104] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.220827] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.255309] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.248279] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 20:26] hrtimer: interrupt took 3466547 ns
	[Apr29 20:29] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [01f30fac305b] <==
	{"level":"info","ts":"2024-04-29T20:45:58.717909Z","caller":"traceutil/trace.go:171","msg":"trace[259978277] transaction","detail":"{read_only:false; response_revision:1454; number_of_response:1; }","duration":"179.638307ms","start":"2024-04-29T20:45:58.538241Z","end":"2024-04-29T20:45:58.71788Z","steps":["trace[259978277] 'process raft request'  (duration: 179.431405ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:45:58.85575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.622912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:45:58.855965Z","caller":"traceutil/trace.go:171","msg":"trace[1396568622] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1454; }","duration":"115.880014ms","start":"2024-04-29T20:45:58.74007Z","end":"2024-04-29T20:45:58.85595Z","steps":["trace[1396568622] 'range keys from in-memory index tree'  (duration: 115.547212ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:09.855862Z","caller":"traceutil/trace.go:171","msg":"trace[811401261] transaction","detail":"{read_only:false; response_revision:1495; number_of_response:1; }","duration":"102.190223ms","start":"2024-04-29T20:46:09.753656Z","end":"2024-04-29T20:46:09.855846Z","steps":["trace[811401261] 'process raft request'  (duration: 102.095822ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:10.071953Z","caller":"traceutil/trace.go:171","msg":"trace[1996796465] transaction","detail":"{read_only:false; response_revision:1496; number_of_response:1; }","duration":"300.29343ms","start":"2024-04-29T20:46:09.77164Z","end":"2024-04-29T20:46:10.071933Z","steps":["trace[1996796465] 'process raft request'  (duration: 295.855603ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:46:10.072618Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:09.771623Z","time spent":"300.479031ms","remote":"127.0.0.1:50854","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2962,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-515700-m03\" mod_revision:1487 > success:<request_put:<key:\"/registry/minions/multinode-515700-m03\" value_size:2916 >> failure:<request_range:<key:\"/registry/minions/multinode-515700-m03\" > >"}
	{"level":"info","ts":"2024-04-29T20:46:15.569199Z","caller":"traceutil/trace.go:171","msg":"trace[1643861658] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"218.350023ms","start":"2024-04-29T20:46:15.350828Z","end":"2024-04-29T20:46:15.569178Z","steps":["trace[1643861658] 'process raft request'  (duration: 218.141522ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:46:15.960586Z","caller":"traceutil/trace.go:171","msg":"trace[1497086569] linearizableReadLoop","detail":"{readStateIndex:1774; appliedIndex:1773; }","duration":"367.734728ms","start":"2024-04-29T20:46:15.592832Z","end":"2024-04-29T20:46:15.960567Z","steps":["trace[1497086569] 'read index received'  (duration: 332.248313ms)","trace[1497086569] 'applied index is now lower than readState.Index'  (duration: 35.485815ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T20:46:15.960951Z","caller":"traceutil/trace.go:171","msg":"trace[818980090] transaction","detail":"{read_only:false; response_revision:1504; number_of_response:1; }","duration":"594.879604ms","start":"2024-04-29T20:46:15.36606Z","end":"2024-04-29T20:46:15.96094Z","steps":["trace[818980090] 'process raft request'  (duration: 559.784592ms)","trace[818980090] 'compare'  (duration: 34.64431ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:46:15.961608Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:15.366043Z","time spent":"594.957105ms","remote":"127.0.0.1:50958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":569,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" mod_revision:1486 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" value_size:508 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-515700-m03\" > >"}
	{"level":"warn","ts":"2024-04-29T20:46:15.962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"369.162137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-515700-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-04-29T20:46:15.96206Z","caller":"traceutil/trace.go:171","msg":"trace[601879282] range","detail":"{range_begin:/registry/minions/multinode-515700-m03; range_end:; response_count:1; response_revision:1504; }","duration":"369.225137ms","start":"2024-04-29T20:46:15.592827Z","end":"2024-04-29T20:46:15.962052Z","steps":["trace[601879282] 'agreement among raft nodes before linearized reading'  (duration: 369.135436ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:46:15.962525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:46:15.592782Z","time spent":"369.464038ms","remote":"127.0.0.1:50854","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3172,"request content":"key:\"/registry/minions/multinode-515700-m03\" "}
	{"level":"warn","ts":"2024-04-29T20:46:15.962622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.652243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:46:15.962781Z","caller":"traceutil/trace.go:171","msg":"trace[632284179] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1504; }","duration":"221.955444ms","start":"2024-04-29T20:46:15.740814Z","end":"2024-04-29T20:46:15.962769Z","steps":["trace[632284179] 'agreement among raft nodes before linearized reading'  (duration: 221.659043ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:49:34.961477Z","caller":"traceutil/trace.go:171","msg":"trace[502856506] linearizableReadLoop","detail":"{readStateIndex:2019; appliedIndex:2018; }","duration":"247.093192ms","start":"2024-04-29T20:49:34.714363Z","end":"2024-04-29T20:49:34.961457Z","steps":["trace[502856506] 'read index received'  (duration: 246.857491ms)","trace[502856506] 'applied index is now lower than readState.Index'  (duration: 235.101µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:49:34.961633Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.382193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-29T20:49:34.961717Z","caller":"traceutil/trace.go:171","msg":"trace[601185574] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:0; response_revision:1707; }","duration":"247.481994ms","start":"2024-04-29T20:49:34.714192Z","end":"2024-04-29T20:49:34.961674Z","steps":["trace[601185574] 'agreement among raft nodes before linearized reading'  (duration: 247.359693ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:49:34.962068Z","caller":"traceutil/trace.go:171","msg":"trace[1359928624] transaction","detail":"{read_only:false; response_revision:1707; number_of_response:1; }","duration":"335.041251ms","start":"2024-04-29T20:49:34.627013Z","end":"2024-04-29T20:49:34.962054Z","steps":["trace[1359928624] 'process raft request'  (duration: 334.263847ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:49:34.962372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:49:34.627001Z","time spent":"335.313352ms","remote":"127.0.0.1:50852","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1705 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-29T20:49:36.278626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.337569ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14852747513224610764 > lease_revoke:<id:4e1f8f2b8851c396>","response":"size:28"}
	{"level":"info","ts":"2024-04-29T20:49:37.084787Z","caller":"traceutil/trace.go:171","msg":"trace[1339822422] transaction","detail":"{read_only:false; response_revision:1708; number_of_response:1; }","duration":"112.564787ms","start":"2024-04-29T20:49:36.9722Z","end":"2024-04-29T20:49:37.084765Z","steps":["trace[1339822422] 'process raft request'  (duration: 112.352586ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:50:06.320544Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1410}
	{"level":"info","ts":"2024-04-29T20:50:06.329963Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1410,"took":"8.848946ms","hash":1297927457,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1785856,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-29T20:50:06.330194Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1297927457,"revision":1410,"compact-revision":1169}
	
	
	==> kernel <==
	 20:53:19 up 30 min,  0 users,  load average: 0.25, 0.50, 0.47
	Linux multinode-515700 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11141cf0a01e] <==
	I0429 20:52:17.056625       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:52:27.067826       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:52:27.067944       1 main.go:227] handling current node
	I0429 20:52:27.067960       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:52:27.067969       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:52:37.084333       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:52:37.084580       1 main.go:227] handling current node
	I0429 20:52:37.084650       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:52:37.084685       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:52:47.100181       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:52:47.100230       1 main.go:227] handling current node
	I0429 20:52:47.100244       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:52:47.100292       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:52:57.107533       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:52:57.107574       1 main.go:227] handling current node
	I0429 20:52:57.107586       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:52:57.107593       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:53:07.122629       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:53:07.127079       1 main.go:227] handling current node
	I0429 20:53:07.127095       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:53:07.127105       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	I0429 20:53:17.133694       1 main.go:223] Handling node with IPs: map[172.17.241.25:{}]
	I0429 20:53:17.133842       1 main.go:227] handling current node
	I0429 20:53:17.133857       1 main.go:223] Handling node with IPs: map[172.17.240.210:{}]
	I0429 20:53:17.133866       1 main.go:250] Node multinode-515700-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9b9ad8fbed85] <==
	I0429 20:25:08.456691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 20:25:09.052862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 20:25:09.062497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 20:25:09.063038       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 20:25:10.434046       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 20:25:10.531926       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 20:25:10.667114       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 20:25:10.682682       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.241.25]
	I0429 20:25:10.685084       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 20:25:10.705095       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 20:25:11.202529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 20:25:11.660474       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 20:25:11.702512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 20:25:11.739640       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 20:25:25.195544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 20:25:25.294821       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 20:41:45.603992       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54600: use of closed network connection
	E0429 20:41:46.683622       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54606: use of closed network connection
	E0429 20:41:47.742503       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54616: use of closed network connection
	E0429 20:42:24.359204       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54636: use of closed network connection
	E0429 20:42:34.907983       1 conn.go:339] Error on socket receive: read tcp 172.17.241.25:8443->172.17.240.1:54638: use of closed network connection
	I0429 20:46:15.963628       1 trace.go:236] Trace[1378232527]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:bc84c8cc-c1e5-4f4d-8a1c-4ed7b226292a,client:172.17.240.210,api-group:coordination.k8s.io,api-version:v1,name:multinode-515700-m03,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-515700-m03,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (29-Apr-2024 20:46:15.363) (total time: 599ms):
	Trace[1378232527]: ["GuaranteedUpdate etcd3" audit-id:bc84c8cc-c1e5-4f4d-8a1c-4ed7b226292a,key:/leases/kube-node-lease/multinode-515700-m03,type:*coordination.Lease,resource:leases.coordination.k8s.io 599ms (20:46:15.364)
	Trace[1378232527]:  ---"Txn call completed" 598ms (20:46:15.963)]
	Trace[1378232527]: [599.725533ms] [599.725533ms] END
	
	
	==> kube-controller-manager [c5de44f1f106] <==
	I0429 20:25:25.820241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.613668ms"
	I0429 20:25:25.820606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.801µs"
	I0429 20:25:26.647122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.452819ms"
	I0429 20:25:26.673190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.454556ms"
	I0429 20:25:26.673366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.301µs"
	I0429 20:25:35.442523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48µs"
	I0429 20:25:35.504302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.901µs"
	I0429 20:25:37.519404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.21268ms"
	I0429 20:25:37.519516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.698µs"
	I0429 20:25:39.495810       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 20:29:47.937478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.419556ms"
	I0429 20:29:47.961915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.36964ms"
	I0429 20:29:47.962862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.499µs"
	I0429 20:29:52.098445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.730146ms"
	I0429 20:29:52.098921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.902µs"
	I0429 20:46:05.025369       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-515700-m03\" does not exist"
	I0429 20:46:05.038750       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-515700-m03" podCIDRs=["10.244.1.0/24"]
	I0429 20:46:09.749698       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-515700-m03"
	I0429 20:46:28.280618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-515700-m03"
	I0429 20:46:28.324633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.8µs"
	I0429 20:46:28.354027       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.9µs"
	I0429 20:46:31.239793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.942065ms"
	I0429 20:46:31.240386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="306.702µs"
	I0429 20:49:49.871652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.942339ms"
	I0429 20:49:49.876024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.5µs"
	
	
	==> kube-proxy [8d116812e2fa] <==
	I0429 20:25:27.278575       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:25:27.322396       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.241.25"]
	I0429 20:25:27.381777       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:25:27.381896       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:25:27.381924       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:25:27.389649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:25:27.392153       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:25:27.392448       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:25:27.396161       1 config.go:192] "Starting service config controller"
	I0429 20:25:27.396372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:25:27.396564       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:25:27.396976       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:25:27.399035       1 config.go:319] "Starting node config controller"
	I0429 20:25:27.399236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:25:27.497521       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:25:27.497518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:25:27.500527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7748681b165f] <==
	W0429 20:25:09.310708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.311983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.372121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.372287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.389043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:25:09.389975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 20:25:09.402308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.402357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.414781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:25:09.414997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:25:09.463545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:25:09.463684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:25:09.473360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 20:25:09.473524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 20:25:09.538214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 20:25:09.538587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 20:25:09.595918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:25:09.596510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:25:09.751697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 20:25:09.752615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 20:25:09.794103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:25:09.794595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 20:25:09.800334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 20:25:09.800494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 20:25:11.092300       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:49:11 multinode-515700 kubelet[2116]: E0429 20:49:11.923081    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:49:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:49:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:49:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:49:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:50:11 multinode-515700 kubelet[2116]: E0429 20:50:11.923459    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:50:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:50:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:50:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:50:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:51:11 multinode-515700 kubelet[2116]: E0429 20:51:11.925624    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:51:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:51:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:51:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:51:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:52:11 multinode-515700 kubelet[2116]: E0429 20:52:11.923106    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:52:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:52:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:52:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:52:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:53:11 multinode-515700 kubelet[2116]: E0429 20:53:11.922586    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:53:11 multinode-515700 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:53:11 multinode-515700 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:53:11 multinode-515700 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:53:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 20:53:11.506937    1364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
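The stderr warning above recurs throughout this report: the Docker CLI fails to resolve its "default" context because the metadata file is missing. The CLI keys each context's metadata directory by a SHA-256 digest of the context name, which is why the path ends in a 64-character hex component. A minimal sketch of that lookup (the helper name is hypothetical; the layout follows docker/cli's documented context store):

```python
import hashlib
from pathlib import Path

def context_meta_path(docker_home: str, context_name: str) -> Path:
    """Path the Docker CLI would check for a named context's metadata.

    Assumes the standard docker/cli layout:
    <docker_home>/contexts/meta/<sha256(name)>/meta.json
    """
    digest = hashlib.sha256(context_name.encode()).hexdigest()
    return Path(docker_home) / "contexts" / "meta" / digest / "meta.json"

p = context_meta_path(r"C:\Users\jenkins.minikube6\.docker", "default")
print(len(p.parts[-2]))  # prints 64 -- the digest component is 64 hex characters
```

Recreating the context store (for example via `docker context use default`) normally clears the warning; in this run it is harmless noise repeated by every minikube invocation.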
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-515700 -n multinode-515700: (12.3259424s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-515700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (147.00s)
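The kubelet log excerpted above repeats the same canary failure once per minute: ip6tables cannot initialize the `nat` table because the guest kernel lacks the corresponding module, so the KUBE-KUBELET-CANARY chain is never created. When triaging a dump like this it helps to confirm how often the error recurs; a small filter along these lines works (a sketch — the regex simply matches the message text shown above):

```python
import re

# Matches the kubelet canary-setup failures repeated throughout the log above.
CANARY_RE = re.compile(r'"Could not set up iptables canary"')

def count_canary_errors(log_lines):
    """Count kubelet iptables-canary failures in an iterable of log lines."""
    return sum(1 for line in log_lines if CANARY_RE.search(line))

sample = [
    'Apr 29 20:53:11 multinode-515700 kubelet[2116]: E0429 20:53:11.922586    2116 iptables.go:577] "Could not set up iptables canary" err=<',
    'Apr 29 20:53:11 multinode-515700 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"',
]
print(count_canary_errors(sample))  # prints 1
```

Each failure spans several lines but carries exactly one "Could not set up iptables canary" header, so the count equals the number of canary attempts that failed.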

                                                
                                    
TestKubernetesUpgrade (1101.29s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-262400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-262400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (8m28.6324061s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-262400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "kubernetes-upgrade-262400" primary control-plane node in "kubernetes-upgrade-262400" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 21:10:30.729702    8764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 21:10:30.873567    8764 out.go:291] Setting OutFile to fd 868 ...
	I0429 21:10:30.874567    8764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 21:10:30.874567    8764 out.go:304] Setting ErrFile to fd 1184...
	I0429 21:10:30.874567    8764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 21:10:30.900859    8764 out.go:298] Setting JSON to false
	I0429 21:10:30.906797    8764 start.go:129] hostinfo: {"hostname":"minikube6","uptime":26970,"bootTime":1714398060,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 21:10:30.906797    8764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 21:10:30.911630    8764 out.go:177] * [kubernetes-upgrade-262400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 21:10:30.919540    8764 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 21:10:30.917620    8764 notify.go:220] Checking for updates...
	I0429 21:10:30.925538    8764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 21:10:30.934993    8764 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 21:10:30.942992    8764 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 21:10:30.950274    8764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 21:10:30.957999    8764 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 21:10:30.958718    8764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 21:10:37.653154    8764 out.go:177] * Using the hyperv driver based on user configuration
	I0429 21:10:37.657065    8764 start.go:297] selected driver: hyperv
	I0429 21:10:37.657065    8764 start.go:901] validating driver "hyperv" against <nil>
	I0429 21:10:37.657065    8764 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 21:10:37.721615    8764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 21:10:37.722603    8764 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 21:10:37.722603    8764 cni.go:84] Creating CNI manager for ""
	I0429 21:10:37.722603    8764 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 21:10:37.723732    8764 start.go:340] cluster config:
	{Name:kubernetes-upgrade-262400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-262400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 21:10:37.723732    8764 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 21:10:37.727057    8764 out.go:177] * Starting "kubernetes-upgrade-262400" primary control-plane node in "kubernetes-upgrade-262400" cluster
	I0429 21:10:37.733277    8764 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 21:10:37.733516    8764 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 21:10:37.733516    8764 cache.go:56] Caching tarball of preloaded images
	I0429 21:10:37.734203    8764 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 21:10:37.734387    8764 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 21:10:37.734691    8764 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-262400\config.json ...
	I0429 21:10:37.734989    8764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-262400\config.json: {Name:mk968d8a57ec925cc13441846936b9fab66dafdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 21:10:37.735979    8764 start.go:360] acquireMachinesLock for kubernetes-upgrade-262400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 21:15:21.446479    8764 start.go:364] duration metric: took 4m43.7082509s to acquireMachinesLock for "kubernetes-upgrade-262400"
	I0429 21:15:21.446817    8764 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-262400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-262400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 21:15:21.447164    8764 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 21:15:21.453164    8764 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 21:15:21.453620    8764 start.go:159] libmachine.API.Create for "kubernetes-upgrade-262400" (driver="hyperv")
	I0429 21:15:21.453674    8764 client.go:168] LocalClient.Create starting
	I0429 21:15:21.453674    8764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 21:15:21.454415    8764 main.go:141] libmachine: Decoding PEM data...
	I0429 21:15:21.454415    8764 main.go:141] libmachine: Parsing certificate...
	I0429 21:15:21.454502    8764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 21:15:21.454502    8764 main.go:141] libmachine: Decoding PEM data...
	I0429 21:15:21.454502    8764 main.go:141] libmachine: Parsing certificate...
	I0429 21:15:21.454502    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 21:15:23.521917    8764 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 21:15:23.522175    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:23.522175    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 21:15:25.393274    8764 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 21:15:25.393481    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:25.393481    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 21:15:27.019963    8764 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 21:15:27.019963    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:27.020252    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 21:15:30.865199    8764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 21:15:30.865253    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:30.868028    8764 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 21:15:31.392569    8764 main.go:141] libmachine: Creating SSH key...
	I0429 21:15:32.052951    8764 main.go:141] libmachine: Creating VM...
	I0429 21:15:32.052951    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 21:15:35.337278    8764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 21:15:35.337792    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:35.337862    8764 main.go:141] libmachine: Using switch "Default Switch"
	I0429 21:15:35.337941    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 21:15:37.230933    8764 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 21:15:37.230933    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:37.230933    8764 main.go:141] libmachine: Creating VHD
	I0429 21:15:37.230933    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 21:15:41.250767    8764 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5B9CC23B-1BF4-4466-ABB6-E9EABB718BAC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 21:15:41.250873    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:41.250873    8764 main.go:141] libmachine: Writing magic tar header
	I0429 21:15:41.250989    8764 main.go:141] libmachine: Writing SSH key tar header
	I0429 21:15:41.261509    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 21:15:44.561212    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:15:44.562513    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:44.562595    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\disk.vhd' -SizeBytes 20000MB
	I0429 21:15:47.266961    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:15:47.266961    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:47.266961    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kubernetes-upgrade-262400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 21:15:52.493275    8764 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	kubernetes-upgrade-262400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 21:15:52.493275    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:52.493275    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kubernetes-upgrade-262400 -DynamicMemoryEnabled $false
	I0429 21:15:54.892592    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:15:54.892592    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:54.893214    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kubernetes-upgrade-262400 -Count 2
	I0429 21:15:57.235436    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:15:57.235527    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:57.235527    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kubernetes-upgrade-262400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\boot2docker.iso'
	I0429 21:15:59.891036    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:15:59.891036    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:15:59.891036    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kubernetes-upgrade-262400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\disk.vhd'
	I0429 21:16:02.602751    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:16:02.602751    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:02.602751    8764 main.go:141] libmachine: Starting VM...
	I0429 21:16:02.602751    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-262400
	I0429 21:16:05.733376    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:16:05.734376    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:05.734376    8764 main.go:141] libmachine: Waiting for host to start...
	I0429 21:16:05.734376    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:08.051500    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:08.051500    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:08.051500    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:16:10.654141    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:16:10.654213    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:11.664405    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:13.882102    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:13.882102    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:13.882682    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:16:16.481419    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:16:16.481419    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:17.482627    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:19.717865    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:19.717865    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:19.717865    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:16:22.307756    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:16:22.307931    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:23.321064    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:25.586141    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:25.586141    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:25.587171    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:16:28.462109    8764 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:16:28.462109    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:29.476983    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:32.043627    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:32.043627    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:32.043744    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:16:35.050432    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:16:35.050432    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:35.051649    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:37.299217    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:37.299217    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:37.299299    8764 machine.go:94] provisionDockerMachine start ...
	I0429 21:16:37.299379    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:39.512233    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:39.512233    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:39.513233    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:16:42.175334    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:16:42.175334    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:42.182060    8764 main.go:141] libmachine: Using SSH client type: native
	I0429 21:16:42.182631    8764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.254.212 22 <nil> <nil>}
	I0429 21:16:42.182631    8764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 21:16:42.319359    8764 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 21:16:42.319470    8764 buildroot.go:166] provisioning hostname "kubernetes-upgrade-262400"
	I0429 21:16:42.319470    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:44.486946    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:44.486946    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:44.487941    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:16:47.110054    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:16:47.110054    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:47.119001    8764 main.go:141] libmachine: Using SSH client type: native
	I0429 21:16:47.119614    8764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.254.212 22 <nil> <nil>}
	I0429 21:16:47.119614    8764 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-262400 && echo "kubernetes-upgrade-262400" | sudo tee /etc/hostname
	I0429 21:16:47.300871    8764 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-262400
	
	I0429 21:16:47.300871    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:49.489603    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:49.489992    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:49.490067    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:16:52.150083    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:16:52.150083    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:52.157424    8764 main.go:141] libmachine: Using SSH client type: native
	I0429 21:16:52.158350    8764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.254.212 22 <nil> <nil>}
	I0429 21:16:52.158350    8764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-262400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-262400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-262400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 21:16:52.312785    8764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 21:16:52.312785    8764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 21:16:52.313323    8764 buildroot.go:174] setting up certificates
	I0429 21:16:52.313323    8764 provision.go:84] configureAuth start
	I0429 21:16:52.313430    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:54.546500    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:54.547160    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:54.547242    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:16:57.171699    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:16:57.171699    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:57.172703    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:16:59.350555    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:16:59.351613    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:16:59.351669    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:02.023320    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:02.023596    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:02.023596    8764 provision.go:143] copyHostCerts
	I0429 21:17:02.024146    8764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 21:17:02.024146    8764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 21:17:02.024300    8764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 21:17:02.026211    8764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 21:17:02.026211    8764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 21:17:02.026833    8764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 21:17:02.028251    8764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 21:17:02.028251    8764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 21:17:02.028251    8764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 21:17:02.029736    8764 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-262400 san=[127.0.0.1 172.17.254.212 kubernetes-upgrade-262400 localhost minikube]
	I0429 21:17:02.530083    8764 provision.go:177] copyRemoteCerts
	I0429 21:17:02.543272    8764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 21:17:02.543272    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:04.755174    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:04.755174    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:04.755174    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:07.471133    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:07.471961    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:07.472279    8764 sshutil.go:53] new ssh client: &{IP:172.17.254.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\id_rsa Username:docker}
	I0429 21:17:07.581887    8764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0385759s)
	I0429 21:17:07.582469    8764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 21:17:07.643128    8764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0429 21:17:07.698156    8764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 21:17:07.753964    8764 provision.go:87] duration metric: took 15.4405221s to configureAuth
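configureAuth (provision.go above) issues a server certificate signed by the minikube CA, with the SAN list shown in the `generating server cert` line. A rough openssl equivalent, with the SANs copied from that log line — file names, subjects, and the use of openssl itself are assumptions, not minikube's actual implementation (it generates certs in Go):

```shell
# Rough openssl sketch of the server-cert generation logged above.
# SANs are copied from the log line; everything else is illustrative.
cd "$(mktemp -d)"
# CA key + self-signed CA cert (minikube keeps these under .minikube/certs).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
    -subj "/O=jenkins.kubernetes-upgrade-262400" -days 1
# Server key + CSR.
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
    -subj "/O=jenkins.kubernetes-upgrade-262400"
# Sign the CSR with the CA, attaching the same SANs seen in the log.
printf 'subjectAltName=IP:127.0.0.1,IP:172.17.254.212,DNS:kubernetes-upgrade-262400,DNS:localhost,DNS:minikube\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
    -CAcreateserial -out server.pem -days 1 -extfile san.cnf
openssl verify -CAfile ca.pem server.pem
```

The resulting `server.pem`/`server-key.pem` pair is what copyRemoteCerts then scp's to `/etc/docker/` for dockerd's TLS listener.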
	I0429 21:17:07.753964    8764 buildroot.go:189] setting minikube options for container-runtime
	I0429 21:17:07.753964    8764 config.go:182] Loaded profile config "kubernetes-upgrade-262400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0429 21:17:07.753964    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:10.008164    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:10.008164    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:10.008164    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:12.733632    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:12.733632    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:12.739728    8764 main.go:141] libmachine: Using SSH client type: native
	I0429 21:17:12.740610    8764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.254.212 22 <nil> <nil>}
	I0429 21:17:12.740610    8764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 21:17:12.892681    8764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 21:17:12.892681    8764 buildroot.go:70] root file system type: tmpfs
	I0429 21:17:12.892681    8764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 21:17:12.893215    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:15.118668    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:15.118668    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:15.118668    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:17.808275    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:17.808356    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:17.815645    8764 main.go:141] libmachine: Using SSH client type: native
	I0429 21:17:17.816533    8764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.254.212 22 <nil> <nil>}
	I0429 21:17:17.816533    8764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 21:17:17.998832    8764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 21:17:17.998832    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:20.170555    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:20.171536    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:20.171596    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:22.858650    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:22.858650    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:22.867877    8764 main.go:141] libmachine: Using SSH client type: native
	I0429 21:17:22.868707    8764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.254.212 22 <nil> <nil>}
	I0429 21:17:22.868707    8764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 21:17:25.159354    8764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
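The `diff -u old new || { mv ...; systemctl ... }` idiom above swaps in the rendered unit (and reloads/restarts docker) only when it differs from what is already installed; on this first boot `diff` fails with "can't stat", so the move and enable happen. The file-handling half of that pattern in isolation (temp paths instead of `/lib/systemd/system`, systemctl calls dropped):

```shell
# File-swap half of the "replace only if changed" idiom from the log,
# sandboxed to a temp dir with the systemctl calls omitted.
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service.new"

update_unit() {
    # diff exits non-zero when the files differ *or* the installed file
    # is missing -- exactly the cases where the new rendering should win.
    diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || \
        mv "$dir/docker.service.new" "$dir/docker.service"
}
update_unit
```

On an unchanged re-run `diff` succeeds, nothing moves, and (in the real command) docker is not restarted needlessly.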
	
	I0429 21:17:25.159354    8764 machine.go:97] duration metric: took 47.8596862s to provisionDockerMachine
	I0429 21:17:25.159354    8764 client.go:171] duration metric: took 2m3.7047231s to LocalClient.Create
	I0429 21:17:25.159354    8764 start.go:167] duration metric: took 2m3.7048007s to libmachine.API.Create "kubernetes-upgrade-262400"
	I0429 21:17:25.159354    8764 start.go:293] postStartSetup for "kubernetes-upgrade-262400" (driver="hyperv")
	I0429 21:17:25.159354    8764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 21:17:25.179291    8764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 21:17:25.179291    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:27.388706    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:27.388706    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:27.388923    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:30.102672    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:30.103425    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:30.103425    8764 sshutil.go:53] new ssh client: &{IP:172.17.254.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\id_rsa Username:docker}
	I0429 21:17:30.209905    8764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0305759s)
	I0429 21:17:30.225260    8764 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 21:17:30.233840    8764 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 21:17:30.233840    8764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 21:17:30.234598    8764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 21:17:30.235830    8764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 21:17:30.249937    8764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 21:17:30.270413    8764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 21:17:30.323066    8764 start.go:296] duration metric: took 5.1636733s for postStartSetup
	I0429 21:17:30.326627    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:32.538736    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:32.538926    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:32.538926    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:35.218469    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:35.218469    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:35.219479    8764 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-262400\config.json ...
	I0429 21:17:35.222658    8764 start.go:128] duration metric: took 2m13.7744613s to createHost
	I0429 21:17:35.222658    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:37.427200    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:37.427200    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:37.427548    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:40.140742    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:40.140742    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:40.148005    8764 main.go:141] libmachine: Using SSH client type: native
	I0429 21:17:40.148578    8764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.254.212 22 <nil> <nil>}
	I0429 21:17:40.148578    8764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 21:17:40.287917    8764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714425460.286721614
	
	I0429 21:17:40.288067    8764 fix.go:216] guest clock: 1714425460.286721614
	I0429 21:17:40.288067    8764 fix.go:229] Guest: 2024-04-29 21:17:40.286721614 +0000 UTC Remote: 2024-04-29 21:17:35.2226589 +0000 UTC m=+424.619260101 (delta=5.064062714s)
	I0429 21:17:40.288264    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:42.479095    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:42.479095    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:42.479258    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:45.190098    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:45.190098    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:45.197276    8764 main.go:141] libmachine: Using SSH client type: native
	I0429 21:17:45.197834    8764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.254.212 22 <nil> <nil>}
	I0429 21:17:45.198043    8764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714425460
	I0429 21:17:45.344128    8764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 21:17:40 UTC 2024
	
	I0429 21:17:45.344128    8764 fix.go:236] clock set: Mon Apr 29 21:17:40 UTC 2024
	 (err=<nil>)
	I0429 21:17:45.344128    8764 start.go:83] releasing machines lock for "kubernetes-upgrade-262400", held for 2m23.8965384s
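The clock-sync step above reads the guest clock over SSH with `date +%s.%N`, computes the delta against the host (5.06s here), and resets the guest with `sudo date -s @<epoch>`. The comparison can be sketched locally — both readings come from one clock in this sketch, so the delta is near zero; awk is used for the float arithmetic:

```shell
# Sketch of the guest/host clock comparison behind the "delta=..." line.
# Both timestamps are local here; in the real run the first comes from
# the VM over SSH, and a large delta triggers `sudo date -s @<epoch>`.
guest=$(date +%s.%N)
host=$(date +%s.%N)
delta=$(awk -v g="$guest" -v h="$host" \
    'BEGIN { d = h - g; if (d < 0) d = -d; print d }')
echo "delta=${delta}s"
```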
	I0429 21:17:45.344676    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:47.667114    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:47.667175    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:47.667175    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:50.325801    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:50.325801    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:50.332006    8764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 21:17:50.332167    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:50.345695    8764 ssh_runner.go:195] Run: cat /version.json
	I0429 21:17:50.345695    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:17:52.706890    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:52.706890    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:52.707246    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:52.765508    8764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:17:52.765603    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:52.765643    8764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:17:55.539351    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:55.539429    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:55.540120    8764 sshutil.go:53] new ssh client: &{IP:172.17.254.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\id_rsa Username:docker}
	I0429 21:17:55.584960    8764 main.go:141] libmachine: [stdout =====>] : 172.17.254.212
	
	I0429 21:17:55.584960    8764 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:17:55.585410    8764 sshutil.go:53] new ssh client: &{IP:172.17.254.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\id_rsa Username:docker}
	I0429 21:17:55.635967    8764 ssh_runner.go:235] Completed: cat /version.json: (5.2902321s)
	I0429 21:17:55.650162    8764 ssh_runner.go:195] Run: systemctl --version
	I0429 21:17:55.719854    8764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3877233s)
	I0429 21:17:55.733868    8764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 21:17:55.743695    8764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 21:17:55.759161    8764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0429 21:17:55.794314    8764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0429 21:17:55.829084    8764 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 21:17:55.829084    8764 start.go:494] detecting cgroup driver to use...
	I0429 21:17:55.829667    8764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 21:17:55.881278    8764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0429 21:17:56.064415    8764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 21:17:56.088390    8764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 21:17:56.104556    8764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 21:17:56.141092    8764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 21:17:56.183304    8764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 21:17:56.221477    8764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 21:17:56.255946    8764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 21:17:56.292285    8764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
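The run of sed commands above rewrites `/etc/containerd/config.toml` to use the cgroupfs driver and the runc v2 shim. The `SystemdCgroup` edit, applied to a small sample config (the sample content is illustrative; GNU sed's `-r`/`-i` flags are assumed, as in the log):

```shell
# The SystemdCgroup sed edit from the log, applied to a sample config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same expression as in the log: flip the value, preserving indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

The capture group keeps the original leading whitespace so the TOML nesting survives the rewrite.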
	I0429 21:17:56.330066    8764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 21:17:56.366249    8764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 21:17:56.400032    8764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:17:56.625931    8764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 21:17:56.667787    8764 start.go:494] detecting cgroup driver to use...
	I0429 21:17:56.684791    8764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 21:17:56.728791    8764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 21:17:56.769797    8764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 21:17:56.830408    8764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 21:17:56.872407    8764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 21:17:56.923534    8764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 21:17:57.002230    8764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 21:17:57.032401    8764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 21:17:57.084773    8764 ssh_runner.go:195] Run: which cri-dockerd
	I0429 21:17:57.108715    8764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 21:17:57.130881    8764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 21:17:57.183346    8764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 21:17:57.435632    8764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 21:17:57.647817    8764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 21:17:57.648149    8764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 21:17:57.719488    8764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:17:57.952084    8764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 21:18:59.108162    8764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1556158s)
	I0429 21:18:59.125396    8764 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 21:18:59.164190    8764 out.go:177] 
	W0429 21:18:59.167215    8764 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 21:17:23 kubernetes-upgrade-262400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:23.552025182Z" level=info msg="Starting up"
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:23.553066282Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:23.554476381Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=665
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.590622275Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.624203268Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.624297968Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.624384168Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.624403668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.624579368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.624599368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.624875468Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.625010868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.625035368Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.625048368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.625155768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.625564568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.628887868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.629074368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.629353868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.629460368Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.629662567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.629823567Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.629972967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.654726263Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.654843163Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.655043263Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.655085063Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.655114663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.655328063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.655898263Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656131063Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656154763Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656176163Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656193463Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656234463Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656258563Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656279463Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656300663Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656318463Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656593863Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.656857262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.657397162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.657724662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.657915562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.658244962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.658471962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659196662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659253062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659307862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659376662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659402462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659423562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659445962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659551162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659605962Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659667062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659703462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659723062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659781362Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659808562Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659827662Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.659845262Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.660110362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.660141262Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.660162162Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.661142562Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.661318862Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.661460362Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:17:23 kubernetes-upgrade-262400 dockerd[665]: time="2024-04-29T21:17:23.661615762Z" level=info msg="containerd successfully booted in 0.073139s"
	Apr 29 21:17:24 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:24.635960290Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:17:24 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:24.674303183Z" level=info msg="Loading containers: start."
	Apr 29 21:17:25 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:25.001168199Z" level=info msg="Loading containers: done."
	Apr 29 21:17:25 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:25.029137613Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:17:25 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:25.029387931Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:17:25 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:25.154447235Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:17:25 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:25.155014776Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:17:25 kubernetes-upgrade-262400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:17:57 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:57.979856988Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:17:57 kubernetes-upgrade-262400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:17:57 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:57.982511006Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:17:57 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:57.983314311Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:17:57 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:57.983374411Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:17:57 kubernetes-upgrade-262400 dockerd[659]: time="2024-04-29T21:17:57.983403312Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:17:58 kubernetes-upgrade-262400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:17:58 kubernetes-upgrade-262400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:17:59 kubernetes-upgrade-262400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:17:59 kubernetes-upgrade-262400 dockerd[1097]: time="2024-04-29T21:17:59.076775869Z" level=info msg="Starting up"
	Apr 29 21:18:59 kubernetes-upgrade-262400 dockerd[1097]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 21:18:59 kubernetes-upgrade-262400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 21:18:59 kubernetes-upgrade-262400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 21:18:59 kubernetes-upgrade-262400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	W0429 21:18:59.168181    8764 out.go:239] * 
	* 
	W0429 21:18:59.169215    8764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 21:18:59.174189    8764 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-262400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: exit status 90
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-262400
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-262400: (1m9.5266043s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-262400 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-262400 status --format={{.Host}}: exit status 7 (3.0716388s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0429 21:20:09.081167    3456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-262400 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
E0429 21:20:24.016968   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-262400 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (7m24.2657262s)

-- stdout --
	* [kubernetes-upgrade-262400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "kubernetes-upgrade-262400" primary control-plane node in "kubernetes-upgrade-262400" cluster
	* Restarting existing hyperv VM for "kubernetes-upgrade-262400" ...
	
	

-- /stdout --
** stderr ** 
	W0429 21:20:12.161749   13616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 21:20:12.275075   13616 out.go:291] Setting OutFile to fd 856 ...
	I0429 21:20:12.276068   13616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 21:20:12.276068   13616 out.go:304] Setting ErrFile to fd 1224...
	I0429 21:20:12.276068   13616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 21:20:12.306067   13616 out.go:298] Setting JSON to false
	I0429 21:20:12.310061   13616 start.go:129] hostinfo: {"hostname":"minikube6","uptime":27551,"bootTime":1714398060,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 21:20:12.310061   13616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 21:20:12.433697   13616 out.go:177] * [kubernetes-upgrade-262400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 21:20:12.436702   13616 notify.go:220] Checking for updates...
	I0429 21:20:12.438698   13616 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 21:20:12.442443   13616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 21:20:12.444970   13616 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 21:20:12.447706   13616 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 21:20:12.450849   13616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 21:20:12.456596   13616 config.go:182] Loaded profile config "kubernetes-upgrade-262400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0429 21:20:12.457599   13616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 21:20:18.338635   13616 out.go:177] * Using the hyperv driver based on existing profile
	I0429 21:20:18.493781   13616 start.go:297] selected driver: hyperv
	I0429 21:20:18.493781   13616 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-262400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-262400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.254.212 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 21:20:18.494373   13616 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 21:20:18.554259   13616 cni.go:84] Creating CNI manager for ""
	I0429 21:20:18.554259   13616 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 21:20:18.554803   13616 start.go:340] cluster config:
	{Name:kubernetes-upgrade-262400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-262400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.254.212 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 21:20:18.555271   13616 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 21:20:18.675673   13616 out.go:177] * Starting "kubernetes-upgrade-262400" primary control-plane node in "kubernetes-upgrade-262400" cluster
	I0429 21:20:18.867575   13616 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 21:20:18.867575   13616 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 21:20:18.867575   13616 cache.go:56] Caching tarball of preloaded images
	I0429 21:20:18.868600   13616 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 21:20:18.868644   13616 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 21:20:18.868644   13616 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-262400\config.json ...
	I0429 21:20:18.871462   13616 start.go:360] acquireMachinesLock for kubernetes-upgrade-262400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 21:24:39.153415   13616 start.go:364] duration metric: took 4m20.2799779s to acquireMachinesLock for "kubernetes-upgrade-262400"
	I0429 21:24:39.153415   13616 start.go:96] Skipping create...Using existing machine configuration
	I0429 21:24:39.153415   13616 fix.go:54] fixHost starting: 
	I0429 21:24:39.154613   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:24:41.320121   13616 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 21:24:41.320287   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:24:41.320287   13616 fix.go:112] recreateIfNeeded on kubernetes-upgrade-262400: state=Stopped err=<nil>
	W0429 21:24:41.320287   13616 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 21:24:41.323852   13616 out.go:177] * Restarting existing hyperv VM for "kubernetes-upgrade-262400" ...
	I0429 21:24:41.326163   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-262400
	I0429 21:24:44.634773   13616 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:24:44.634773   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:24:44.634773   13616 main.go:141] libmachine: Waiting for host to start...
	I0429 21:24:44.634773   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:24:47.100544   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:24:47.100621   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:24:47.100720   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:24:49.824330   13616 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:24:49.824330   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:24:50.839263   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:24:53.062089   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:24:53.062904   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:24:53.062904   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:24:55.722326   13616 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:24:55.722450   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:24:56.736517   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:24:59.027197   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:24:59.027268   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:24:59.027268   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:01.718958   13616 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:25:01.719125   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:02.726861   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:04.987735   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:04.988052   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:04.988052   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:07.624335   13616 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:25:07.624335   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:08.629886   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:10.862529   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:10.863075   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:10.863143   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:13.583431   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:13.583431   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:13.588917   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:15.787560   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:15.787560   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:15.788209   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:18.441739   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:18.442645   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:18.442645   13616 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-262400\config.json ...
	I0429 21:25:18.446032   13616 machine.go:94] provisionDockerMachine start ...
	I0429 21:25:18.446138   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:20.606737   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:20.606737   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:20.607565   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:23.253477   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:23.253477   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:23.261116   13616 main.go:141] libmachine: Using SSH client type: native
	I0429 21:25:23.261628   13616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.1 22 <nil> <nil>}
	I0429 21:25:23.261628   13616 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 21:25:23.405149   13616 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 21:25:23.405149   13616 buildroot.go:166] provisioning hostname "kubernetes-upgrade-262400"
	I0429 21:25:23.405149   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:25.607902   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:25.608285   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:25.608285   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:28.311092   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:28.311092   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:28.318289   13616 main.go:141] libmachine: Using SSH client type: native
	I0429 21:25:28.319149   13616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.1 22 <nil> <nil>}
	I0429 21:25:28.319149   13616 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-262400 && echo "kubernetes-upgrade-262400" | sudo tee /etc/hostname
	I0429 21:25:28.494766   13616 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-262400
	
	I0429 21:25:28.494852   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:30.671668   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:30.671740   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:30.671844   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:33.394076   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:33.394076   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:33.401071   13616 main.go:141] libmachine: Using SSH client type: native
	I0429 21:25:33.401071   13616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.1 22 <nil> <nil>}
	I0429 21:25:33.401071   13616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-262400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-262400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-262400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 21:25:33.562824   13616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 21:25:33.562824   13616 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 21:25:33.562824   13616 buildroot.go:174] setting up certificates
	I0429 21:25:33.562824   13616 provision.go:84] configureAuth start
	I0429 21:25:33.562824   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:35.786732   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:35.787544   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:35.787544   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:38.459134   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:38.459134   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:38.459134   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:40.659511   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:40.659511   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:40.660594   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:43.342924   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:43.343813   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:43.343813   13616 provision.go:143] copyHostCerts
	I0429 21:25:43.344438   13616 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 21:25:43.344500   13616 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 21:25:43.344913   13616 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 21:25:43.345381   13616 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 21:25:43.345381   13616 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 21:25:43.346886   13616 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 21:25:43.348726   13616 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 21:25:43.348784   13616 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 21:25:43.348784   13616 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 21:25:43.349984   13616 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-262400 san=[127.0.0.1 172.17.253.1 kubernetes-upgrade-262400 localhost minikube]
	I0429 21:25:43.580755   13616 provision.go:177] copyRemoteCerts
	I0429 21:25:43.601020   13616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 21:25:43.601150   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:45.855028   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:45.855096   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:45.855300   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:48.577320   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:48.577320   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:48.577320   13616 sshutil.go:53] new ssh client: &{IP:172.17.253.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\id_rsa Username:docker}
	I0429 21:25:48.696960   13616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0958406s)
	I0429 21:25:48.697730   13616 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 21:25:48.751946   13616 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0429 21:25:48.812139   13616 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 21:25:48.875651   13616 provision.go:87] duration metric: took 15.312705s to configureAuth
	I0429 21:25:48.875651   13616 buildroot.go:189] setting minikube options for container-runtime
	I0429 21:25:48.875651   13616 config.go:182] Loaded profile config "kubernetes-upgrade-262400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 21:25:48.875651   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:51.070554   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:51.071001   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:51.071100   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:53.756327   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:53.756327   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:53.765913   13616 main.go:141] libmachine: Using SSH client type: native
	I0429 21:25:53.766844   13616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.1 22 <nil> <nil>}
	I0429 21:25:53.766844   13616 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 21:25:53.906474   13616 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 21:25:53.906474   13616 buildroot.go:70] root file system type: tmpfs
	I0429 21:25:53.908025   13616 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 21:25:53.908025   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:25:56.105185   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:25:56.105185   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:56.105185   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:25:58.760080   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:25:58.760080   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:25:58.772289   13616 main.go:141] libmachine: Using SSH client type: native
	I0429 21:25:58.772768   13616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.1 22 <nil> <nil>}
	I0429 21:25:58.773417   13616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 21:25:58.950615   13616 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 21:25:58.950615   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:26:01.126620   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:26:01.126795   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:01.126864   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:26:03.803648   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:26:03.803648   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:03.810694   13616 main.go:141] libmachine: Using SSH client type: native
	I0429 21:26:03.810947   13616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.1 22 <nil> <nil>}
	I0429 21:26:03.810947   13616 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 21:26:06.133539   13616 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 21:26:06.133598   13616 machine.go:97] duration metric: took 47.687125s to provisionDockerMachine
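The `sudo diff -u ... || { sudo mv ...; systemctl daemon-reload ...; }` step a few lines above is an idempotent install: the new unit is swapped in (and the daemon restarted) only when it differs from the live one. A minimal stand-alone sketch of that pattern, using plain files instead of the systemd unit and an illustrative helper name:

```shell
# Compare-and-install sketch (illustrative; no systemd involved).
update_if_changed() {
  target="$1"; candidate="$2"
  if diff -u "$target" "$candidate" >/dev/null 2>&1; then
    rm -f "$candidate"          # identical: discard the candidate
    echo "unchanged"
  else
    mv "$candidate" "$target"   # differs (or target missing): install it
    echo "updated"              # real flow: daemon-reload + enable + restart
  fi
}
dir=$(mktemp -d)
printf 'v2\n' > "$dir/unit.new"
update_if_changed "$dir/unit" "$dir/unit.new"   # target missing -> installs
printf 'v2\n' > "$dir/unit.new"
update_if_changed "$dir/unit" "$dir/unit.new"   # identical -> no-op
```

In the log the target did not exist yet (`diff: can't stat`), so the command fell through to the install-and-restart branch.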
	I0429 21:26:06.133652   13616 start.go:293] postStartSetup for "kubernetes-upgrade-262400" (driver="hyperv")
	I0429 21:26:06.133652   13616 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 21:26:06.146485   13616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 21:26:06.147533   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:26:08.389506   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:26:08.389506   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:08.389617   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:26:11.075445   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:26:11.075445   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:11.076489   13616 sshutil.go:53] new ssh client: &{IP:172.17.253.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\id_rsa Username:docker}
	I0429 21:26:11.192370   13616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0447962s)
	I0429 21:26:11.206281   13616 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 21:26:11.213678   13616 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 21:26:11.213678   13616 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 21:26:11.213678   13616 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 21:26:11.215083   13616 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 21:26:11.228849   13616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 21:26:11.249953   13616 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 21:26:11.301038   13616 start.go:296] duration metric: took 5.1673446s for postStartSetup
	I0429 21:26:11.301160   13616 fix.go:56] duration metric: took 1m32.1470142s for fixHost
	I0429 21:26:11.301247   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:26:13.481749   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:26:13.481749   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:13.481749   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:26:16.144185   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:26:16.145037   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:16.151769   13616 main.go:141] libmachine: Using SSH client type: native
	I0429 21:26:16.151769   13616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.1 22 <nil> <nil>}
	I0429 21:26:16.151769   13616 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 21:26:16.285733   13616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714425976.290595891
	
	I0429 21:26:16.285822   13616 fix.go:216] guest clock: 1714425976.290595891
	I0429 21:26:16.285822   13616 fix.go:229] Guest: 2024-04-29 21:26:16.290595891 +0000 UTC Remote: 2024-04-29 21:26:11.3011609 +0000 UTC m=+359.254310101 (delta=4.989434991s)
	I0429 21:26:16.285933   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:26:18.494142   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:26:18.494729   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:18.494729   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:26:21.197236   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:26:21.197236   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:21.204406   13616 main.go:141] libmachine: Using SSH client type: native
	I0429 21:26:21.205250   13616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.253.1 22 <nil> <nil>}
	I0429 21:26:21.205250   13616 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714425976
	I0429 21:26:21.356787   13616 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 21:26:16 UTC 2024
	
	I0429 21:26:21.356787   13616 fix.go:236] clock set: Mon Apr 29 21:26:16 UTC 2024
	 (err=<nil>)
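The guest-clock fix above reads the VM's epoch time over SSH (`date +%s.%N`), compares it to the host's, and resets the guest with `sudo date -s @<epoch>` when the delta (4.99s here) is too large. A hedged local sketch of the delta computation, with both readings taken on the same machine so the delta is ~0:

```shell
# Guest-clock delta sketch (both readings local; in the real flow the first
# runs over SSH inside the VM).
guest=$(date +%s)
host=$(date +%s)
delta=$((host - guest))
echo "delta=${delta}s"
# When |delta| is large, the log resets the guest: sudo date -s @<host_epoch>
```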
	I0429 21:26:21.356787   13616 start.go:83] releasing machines lock for "kubernetes-upgrade-262400", held for 1m42.2025606s
	I0429 21:26:21.356787   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:26:23.603584   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:26:23.603584   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:23.604448   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:26:26.274419   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:26:26.274419   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:26.278427   13616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 21:26:26.278427   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:26:26.291458   13616 ssh_runner.go:195] Run: cat /version.json
	I0429 21:26:26.291458   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-262400 ).state
	I0429 21:26:29.053552   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:26:29.053552   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:29.053552   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:26:29.101546   13616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:26:29.101669   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:29.101669   13616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-262400 ).networkadapters[0]).ipaddresses[0]
	I0429 21:26:32.642101   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:26:32.642101   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:32.643726   13616 sshutil.go:53] new ssh client: &{IP:172.17.253.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\id_rsa Username:docker}
	I0429 21:26:32.679973   13616 main.go:141] libmachine: [stdout =====>] : 172.17.253.1
	
	I0429 21:26:32.679973   13616 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:26:32.680994   13616 sshutil.go:53] new ssh client: &{IP:172.17.253.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-262400\id_rsa Username:docker}
	I0429 21:26:32.746457   13616 ssh_runner.go:235] Completed: cat /version.json: (6.4549475s)
	I0429 21:26:32.762894   13616 ssh_runner.go:195] Run: systemctl --version
	I0429 21:26:32.844806   13616 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.5663267s)
	I0429 21:26:32.861978   13616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 21:26:32.874988   13616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 21:26:32.892078   13616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0429 21:26:32.928538   13616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0429 21:26:32.973138   13616 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
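The `find ... -exec sed` runs above rewrite any existing CNI bridge/podman configs to minikube's pod subnet. A sketch of the core substitution against a throwaway one-line config (the log applies the same expression across `/etc/cni/net.d`):

```shell
# CNI subnet rewrite sketch: preserve surrounding text, swap only the CIDR.
conf=$(mktemp)
printf '%s\n' '      "subnet": "10.88.0.0/16",' > "$conf"
sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' "$conf"
cat "$conf"
```

The capture groups keep the original indentation and trailing comma intact, which is why the rewritten `87-podman-bridge.conflist` stays valid JSON.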
	I0429 21:26:32.973311   13616 start.go:494] detecting cgroup driver to use...
	I0429 21:26:32.973660   13616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 21:26:33.033263   13616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 21:26:33.071683   13616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 21:26:33.097029   13616 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 21:26:33.111444   13616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 21:26:33.150732   13616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 21:26:33.186896   13616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 21:26:33.226854   13616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 21:26:33.269730   13616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 21:26:33.309300   13616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 21:26:33.346628   13616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 21:26:33.381590   13616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
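The containerd edits above (sandbox image, runtime shim, cgroup driver) all use the same indentation-preserving `sed -r` idiom. A sketch of the `SystemdCgroup` rewrite applied to a throwaway two-line `config.toml` (GNU sed `-i`, as on the Buildroot guest):

```shell
# SystemdCgroup rewrite sketch on a minimal stand-in config.toml.
cfg=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '    SystemdCgroup = true' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

The `( *)` capture keeps TOML indentation unchanged, so only the value flips; minikube forces `false` here because it configures the "cgroupfs" driver, per the log line above.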
	I0429 21:26:33.420391   13616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 21:26:33.459877   13616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 21:26:33.498022   13616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:26:33.724436   13616 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 21:26:33.759326   13616 start.go:494] detecting cgroup driver to use...
	I0429 21:26:33.772432   13616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 21:26:33.814842   13616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 21:26:33.853686   13616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 21:26:33.910709   13616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 21:26:33.957811   13616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 21:26:34.000782   13616 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 21:26:34.090909   13616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 21:26:34.119646   13616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 21:26:34.175549   13616 ssh_runner.go:195] Run: which cri-dockerd
	I0429 21:26:34.194566   13616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 21:26:34.220020   13616 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 21:26:34.271685   13616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 21:26:34.505113   13616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 21:26:34.725942   13616 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 21:26:34.726216   13616 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 21:26:34.779651   13616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:26:35.016890   13616 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 21:27:36.181654   13616 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1641734s)
	I0429 21:27:36.197173   13616 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 21:27:36.232803   13616 out.go:177] 
	W0429 21:27:36.236268   13616 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 21:26:04 kubernetes-upgrade-262400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:04.538568323Z" level=info msg="Starting up"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:04.540122848Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:04.541443769Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.582077027Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.611649105Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.612116213Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.612408917Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.612536119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.613637337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.613885141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614143245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614256247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614281248Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614293748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614988259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.615990575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.619152426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.619299229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.619520832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.619643134Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.620520949Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.620649951Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.620672251Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622724384Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622890187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622924587Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622943688Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622959288Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623036289Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623488097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623608798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623718500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623853502Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623922804Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624039805Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624107507Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624126407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624144107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624160307Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624175908Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624196708Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624219708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624236209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624253309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624272809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624287709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624302610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624316910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624332710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624347810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624365211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624378911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624392911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624407211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624427612Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624451912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624466412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624481113Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624593314Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624647415Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624664716Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624676916Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624864319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624907819Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624930820Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.625576530Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.625832334Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.625898536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.625920936Z" level=info msg="containerd successfully booted in 0.048592s"
	Apr 29 21:26:05 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:05.601625977Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:26:05 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:05.657267921Z" level=info msg="Loading containers: start."
	Apr 29 21:26:05 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:05.945450098Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.042193124Z" level=info msg="Loading containers: done."
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.077489865Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.078370951Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.136943123Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.137044022Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:26:06 kubernetes-upgrade-262400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.048403001Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:26:35 kubernetes-upgrade-262400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.053365351Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.053797447Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.053942146Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.054140044Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:26:36 kubernetes-upgrade-262400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:26:36 kubernetes-upgrade-262400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:26:36 kubernetes-upgrade-262400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:26:36 kubernetes-upgrade-262400 dockerd[1109]: time="2024-04-29T21:26:36.154235277Z" level=info msg="Starting up"
	Apr 29 21:27:36 kubernetes-upgrade-262400 dockerd[1109]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 21:27:36 kubernetes-upgrade-262400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 21:27:36 kubernetes-upgrade-262400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 21:27:36 kubernetes-upgrade-262400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 21:26:04 kubernetes-upgrade-262400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:04.538568323Z" level=info msg="Starting up"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:04.540122848Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:04.541443769Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.582077027Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.611649105Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.612116213Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.612408917Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.612536119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.613637337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.613885141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614143245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614256247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614281248Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614293748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.614988259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.615990575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.619152426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.619299229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.619520832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.619643134Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.620520949Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.620649951Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.620672251Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622724384Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622890187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622924587Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622943688Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.622959288Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623036289Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623488097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623608798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623718500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623853502Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.623922804Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624039805Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624107507Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624126407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624144107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624160307Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624175908Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624196708Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624219708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624236209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624253309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624272809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624287709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624302610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624316910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624332710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624347810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624365211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624378911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624392911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624407211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624427612Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624451912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624466412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624481113Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624593314Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624647415Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624664716Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624676916Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624864319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624907819Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.624930820Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.625576530Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.625832334Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.625898536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:26:04 kubernetes-upgrade-262400 dockerd[658]: time="2024-04-29T21:26:04.625920936Z" level=info msg="containerd successfully booted in 0.048592s"
	Apr 29 21:26:05 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:05.601625977Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:26:05 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:05.657267921Z" level=info msg="Loading containers: start."
	Apr 29 21:26:05 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:05.945450098Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.042193124Z" level=info msg="Loading containers: done."
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.077489865Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.078370951Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.136943123Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:26:06 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:06.137044022Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:26:06 kubernetes-upgrade-262400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.048403001Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:26:35 kubernetes-upgrade-262400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.053365351Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.053797447Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.053942146Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:26:35 kubernetes-upgrade-262400 dockerd[651]: time="2024-04-29T21:26:35.054140044Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:26:36 kubernetes-upgrade-262400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:26:36 kubernetes-upgrade-262400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:26:36 kubernetes-upgrade-262400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:26:36 kubernetes-upgrade-262400 dockerd[1109]: time="2024-04-29T21:26:36.154235277Z" level=info msg="Starting up"
	Apr 29 21:27:36 kubernetes-upgrade-262400 dockerd[1109]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 21:27:36 kubernetes-upgrade-262400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 21:27:36 kubernetes-upgrade-262400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 21:27:36 kubernetes-upgrade-262400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 21:27:36.236268   13616 out.go:239] * 
	* 
	W0429 21:27:36.238447   13616 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 21:27:36.241321   13616 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-262400 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv : exit status 90
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-262400 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-262400 version --output=json: exit status 1 (168.8672ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-262400" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-04-29 21:27:36.6323295 +0000 UTC m=+10018.696473101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-262400 -n kubernetes-upgrade-262400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-262400 -n kubernetes-upgrade-262400: exit status 6 (12.6385625s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 21:27:36.766005   14104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 21:27:49.201919   14104 status.go:417] kubeconfig endpoint: get endpoint: "kubernetes-upgrade-262400" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-262400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-262400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-262400
E0429 21:28:10.243133   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-262400: (1m2.5982268s)
--- FAIL: TestKubernetesUpgrade (1101.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (299.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-262400 --driver=hyperv
E0429 21:13:10.245460   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-262400 --driver=hyperv: exit status 1 (4m59.5643814s)

                                                
                                                
-- stdout --
	* [NoKubernetes-262400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-262400" primary control-plane node in "NoKubernetes-262400" cluster

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 21:10:31.204779    5816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-262400 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-262400 -n NoKubernetes-262400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-262400 -n NoKubernetes-262400: exit status 7 (279.374ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 21:15:30.747733   13976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-262400" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.85s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (511.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-416800 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-416800 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (5m17.4091176s)

                                                
                                                
-- stdout --
	* [pause-416800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "pause-416800" primary control-plane node in "pause-416800" cluster
	* Updating the running hyperv "pause-416800" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 21:26:26.564474    2584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 21:26:26.651632    2584 out.go:291] Setting OutFile to fd 868 ...
	I0429 21:26:26.652285    2584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 21:26:26.652356    2584 out.go:304] Setting ErrFile to fd 2024...
	I0429 21:26:26.652356    2584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 21:26:26.679136    2584 out.go:298] Setting JSON to false
	I0429 21:26:26.685137    2584 start.go:129] hostinfo: {"hostname":"minikube6","uptime":27926,"bootTime":1714398060,"procs":208,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 21:26:26.685137    2584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 21:26:26.689134    2584 out.go:177] * [pause-416800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 21:26:26.692224    2584 notify.go:220] Checking for updates...
	I0429 21:26:26.695137    2584 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 21:26:26.697138    2584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 21:26:26.700137    2584 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 21:26:26.703143    2584 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 21:26:26.705138    2584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 21:26:26.709133    2584 config.go:182] Loaded profile config "pause-416800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 21:26:26.711196    2584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 21:26:33.315686    2584 out.go:177] * Using the hyperv driver based on existing profile
	I0429 21:26:33.318688    2584 start.go:297] selected driver: hyperv
	I0429 21:26:33.318688    2584 start.go:901] validating driver "hyperv" against &{Name:pause-416800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.0 ClusterName:pause-416800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.243.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 21:26:33.318688    2584 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 21:26:33.373576    2584 cni.go:84] Creating CNI manager for ""
	I0429 21:26:33.373576    2584 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 21:26:33.374577    2584 start.go:340] cluster config:
	{Name:pause-416800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-416800 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.243.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 21:26:33.374577    2584 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 21:26:33.377844    2584 out.go:177] * Starting "pause-416800" primary control-plane node in "pause-416800" cluster
	I0429 21:26:33.383187    2584 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 21:26:33.383187    2584 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 21:26:33.383187    2584 cache.go:56] Caching tarball of preloaded images
	I0429 21:26:33.383575    2584 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 21:26:33.383575    2584 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 21:26:33.383575    2584 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-416800\config.json ...
	I0429 21:26:33.386581    2584 start.go:360] acquireMachinesLock for pause-416800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 21:29:09.994720    2584 start.go:364] duration metric: took 2m36.6068862s to acquireMachinesLock for "pause-416800"
	I0429 21:29:09.995574    2584 start.go:96] Skipping create...Using existing machine configuration
	I0429 21:29:09.995604    2584 fix.go:54] fixHost starting: 
	I0429 21:29:09.996420    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:12.654214    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:12.654214    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:12.654214    2584 fix.go:112] recreateIfNeeded on pause-416800: state=Running err=<nil>
	W0429 21:29:12.654214    2584 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 21:29:12.657984    2584 out.go:177] * Updating the running hyperv "pause-416800" VM ...
	I0429 21:29:12.661393    2584 machine.go:94] provisionDockerMachine start ...
	I0429 21:29:12.661942    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:15.172450    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:15.172450    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:15.172572    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:29:18.128609    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:29:18.128810    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:18.139369    2584 main.go:141] libmachine: Using SSH client type: native
	I0429 21:29:18.140358    2584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.243.17 22 <nil> <nil>}
	I0429 21:29:18.140358    2584 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 21:29:18.299929    2584 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-416800
	
	I0429 21:29:18.299929    2584 buildroot.go:166] provisioning hostname "pause-416800"
	I0429 21:29:18.299929    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:20.935062    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:20.935140    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:20.935203    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:29:23.504563    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:29:23.504563    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:23.511808    2584 main.go:141] libmachine: Using SSH client type: native
	I0429 21:29:23.511808    2584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.243.17 22 <nil> <nil>}
	I0429 21:29:23.512351    2584 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-416800 && echo "pause-416800" | sudo tee /etc/hostname
	I0429 21:29:23.683824    2584 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-416800
	
	I0429 21:29:23.683824    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:25.891439    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:25.891520    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:25.891591    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:29:28.615275    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:29:28.615275    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:28.622397    2584 main.go:141] libmachine: Using SSH client type: native
	I0429 21:29:28.623317    2584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.243.17 22 <nil> <nil>}
	I0429 21:29:28.623317    2584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-416800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-416800/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-416800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 21:29:28.770689    2584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 21:29:28.770689    2584 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 21:29:28.770770    2584 buildroot.go:174] setting up certificates
	I0429 21:29:28.770770    2584 provision.go:84] configureAuth start
	I0429 21:29:28.770855    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:31.021158    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:31.021158    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:31.022183    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:29:33.808511    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:29:33.808511    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:33.808974    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:36.097395    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:36.097395    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:36.097395    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:29:38.874331    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:29:38.874379    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:38.874444    2584 provision.go:143] copyHostCerts
	I0429 21:29:38.874444    2584 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 21:29:38.874444    2584 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 21:29:38.875073    2584 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 21:29:38.876726    2584 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 21:29:38.876817    2584 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 21:29:38.877032    2584 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 21:29:38.878464    2584 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 21:29:38.878464    2584 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 21:29:38.879092    2584 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 21:29:38.880090    2584 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-416800 san=[127.0.0.1 172.17.243.17 localhost minikube pause-416800]
	I0429 21:29:39.279145    2584 provision.go:177] copyRemoteCerts
	I0429 21:29:39.297129    2584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 21:29:39.297129    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:41.572042    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:41.572784    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:41.572916    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:29:44.358293    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:29:44.358293    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:44.359303    2584 sshutil.go:53] new ssh client: &{IP:172.17.243.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-416800\id_rsa Username:docker}
	I0429 21:29:44.463121    2584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1658048s)
	I0429 21:29:44.463649    2584 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 21:29:44.521168    2584 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I0429 21:29:44.581142    2584 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 21:29:44.642685    2584 provision.go:87] duration metric: took 15.8717414s to configureAuth
	I0429 21:29:44.642764    2584 buildroot.go:189] setting minikube options for container-runtime
	I0429 21:29:44.642829    2584 config.go:182] Loaded profile config "pause-416800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 21:29:44.643490    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:46.894535    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:46.894772    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:46.894836    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:29:49.545317    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:29:49.545317    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:49.551369    2584 main.go:141] libmachine: Using SSH client type: native
	I0429 21:29:49.552073    2584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.243.17 22 <nil> <nil>}
	I0429 21:29:49.552073    2584 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 21:29:49.690953    2584 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 21:29:49.690953    2584 buildroot.go:70] root file system type: tmpfs
	I0429 21:29:49.690953    2584 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 21:29:49.692237    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:51.882360    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:51.882360    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:51.882360    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:29:54.628379    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:29:54.628379    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:54.634426    2584 main.go:141] libmachine: Using SSH client type: native
	I0429 21:29:54.634426    2584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.243.17 22 <nil> <nil>}
	I0429 21:29:54.634426    2584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 21:29:54.829251    2584 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 21:29:54.829251    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:29:57.286027    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:29:57.286150    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:29:57.286225    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:30:00.004552    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:30:00.005000    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:00.010836    2584 main.go:141] libmachine: Using SSH client type: native
	I0429 21:30:00.011423    2584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.243.17 22 <nil> <nil>}
	I0429 21:30:00.011423    2584 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 21:30:00.173317    2584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 21:30:00.173317    2584 machine.go:97] duration metric: took 47.5115461s to provisionDockerMachine
	I0429 21:30:00.173317    2584 start.go:293] postStartSetup for "pause-416800" (driver="hyperv")
	I0429 21:30:00.173317    2584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 21:30:00.189028    2584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 21:30:00.189028    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:30:02.395781    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:30:02.396713    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:02.396858    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:30:05.944728    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:30:05.944728    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:05.944728    2584 sshutil.go:53] new ssh client: &{IP:172.17.243.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-416800\id_rsa Username:docker}
	I0429 21:30:06.067127    2584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.8779424s)
	I0429 21:30:06.081472    2584 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 21:30:06.089454    2584 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 21:30:06.089554    2584 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 21:30:06.089984    2584 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 21:30:06.090819    2584 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem -> 137562.pem in /etc/ssl/certs
	I0429 21:30:06.107141    2584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 21:30:06.131783    2584 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /etc/ssl/certs/137562.pem (1708 bytes)
	I0429 21:30:06.187941    2584 start.go:296] duration metric: took 6.0145764s for postStartSetup
	I0429 21:30:06.191117    2584 fix.go:56] duration metric: took 56.1950654s for fixHost
	I0429 21:30:06.191117    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:30:08.393649    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:30:08.393762    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:08.393869    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:30:11.102672    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:30:11.102672    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:11.109346    2584 main.go:141] libmachine: Using SSH client type: native
	I0429 21:30:11.109346    2584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.243.17 22 <nil> <nil>}
	I0429 21:30:11.109346    2584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 21:30:11.247968    2584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714426211.257672105
	
	I0429 21:30:11.247968    2584 fix.go:216] guest clock: 1714426211.257672105
	I0429 21:30:11.248107    2584 fix.go:229] Guest: 2024-04-29 21:30:11.257672105 +0000 UTC Remote: 2024-04-29 21:30:06.1911173 +0000 UTC m=+219.744679501 (delta=5.066554805s)
	I0429 21:30:11.248315    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:30:13.467463    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:30:13.467463    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:13.467963    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:30:16.216722    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:30:16.216778    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:16.224947    2584 main.go:141] libmachine: Using SSH client type: native
	I0429 21:30:16.225854    2584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.243.17 22 <nil> <nil>}
	I0429 21:30:16.225854    2584 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714426211
	I0429 21:30:16.385367    2584 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 21:30:11 UTC 2024
	
	I0429 21:30:16.385463    2584 fix.go:236] clock set: Mon Apr 29 21:30:11 UTC 2024
	 (err=<nil>)
	I0429 21:30:16.385463    2584 start.go:83] releasing machines lock for "pause-416800", held for 1m6.390216s
	I0429 21:30:16.385832    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:30:18.824539    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:30:18.824539    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:18.824539    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:30:21.707898    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:30:21.847289    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:21.858196    2584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 21:30:21.859140    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:30:21.870787    2584 ssh_runner.go:195] Run: cat /version.json
	I0429 21:30:21.871071    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-416800 ).state
	I0429 21:30:24.191743    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:30:24.191743    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:24.191924    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:30:24.194474    2584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:30:24.194558    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:24.194558    2584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-416800 ).networkadapters[0]).ipaddresses[0]
	I0429 21:30:26.985577    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:30:26.985577    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:26.985577    2584 sshutil.go:53] new ssh client: &{IP:172.17.243.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-416800\id_rsa Username:docker}
	I0429 21:30:27.022570    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:30:27.022570    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:27.023557    2584 sshutil.go:53] new ssh client: &{IP:172.17.243.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-416800\id_rsa Username:docker}
	I0429 21:30:27.082565    2584 ssh_runner.go:235] Completed: cat /version.json: (5.2117371s)
	I0429 21:30:27.099566    2584 ssh_runner.go:195] Run: systemctl --version
	I0429 21:30:29.114386    2584 ssh_runner.go:235] Completed: systemctl --version: (2.0148034s)
	I0429 21:30:29.115006    2584 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.2565873s)
	W0429 21:30:29.115285    2584 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0429 21:30:29.115671    2584 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0429 21:30:29.115932    2584 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0429 21:30:29.136937    2584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 21:30:29.151554    2584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 21:30:29.168177    2584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 21:30:29.195412    2584 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 21:30:29.195412    2584 start.go:494] detecting cgroup driver to use...
	I0429 21:30:29.195412    2584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 21:30:29.267125    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 21:30:29.319461    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 21:30:29.357075    2584 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 21:30:29.381075    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 21:30:29.442285    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 21:30:29.486896    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 21:30:29.537918    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 21:30:29.594891    2584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 21:30:29.653782    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 21:30:29.694596    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 21:30:29.735709    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 21:30:29.785727    2584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 21:30:29.825661    2584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 21:30:29.873687    2584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:30:30.358445    2584 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 21:30:30.413057    2584 start.go:494] detecting cgroup driver to use...
	I0429 21:30:30.437306    2584 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 21:30:30.501885    2584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 21:30:30.546878    2584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 21:30:30.619534    2584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 21:30:30.691014    2584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 21:30:30.723029    2584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 21:30:30.820235    2584 ssh_runner.go:195] Run: which cri-dockerd
	I0429 21:30:30.862335    2584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 21:30:30.898592    2584 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 21:30:30.972340    2584 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 21:30:31.396915    2584 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 21:30:31.755467    2584 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 21:30:31.755884    2584 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 21:30:31.812954    2584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:30:32.218463    2584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 21:31:43.689550    2584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4704657s)
	I0429 21:31:43.704733    2584 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 21:31:43.775543    2584 out.go:177] 
	W0429 21:31:43.778152    2584 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 21:24:17 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:24:17 pause-416800 dockerd[655]: time="2024-04-29T21:24:17.627211246Z" level=info msg="Starting up"
	Apr 29 21:24:17 pause-416800 dockerd[655]: time="2024-04-29T21:24:17.628407685Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:24:17 pause-416800 dockerd[655]: time="2024-04-29T21:24:17.629693226Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=661
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.678950314Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.710969847Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711084050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711323458Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711434562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711570566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711677570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712121184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712280589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712307690Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712320390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712425794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712929110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.715949207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716089412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716361621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716406122Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716527226Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716682231Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716726732Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746659397Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746878205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746910806Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746932106Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746952007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.747107712Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.747684831Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748291350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748352552Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748385753Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748409654Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748434855Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748456355Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748482656Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748543658Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748565359Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748603560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748624061Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748655462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748687663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748751365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748795266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748855368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748876469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748895970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748916070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748936471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748982472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749004473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749025474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749042574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749068375Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749099876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749121577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749142378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749224980Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749943103Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.750222512Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.750432719Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.750857533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.751022438Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.751579656Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.752779895Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.752934200Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.753010902Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.753037203Z" level=info msg="containerd successfully booted in 0.076654s"
	Apr 29 21:24:18 pause-416800 dockerd[655]: time="2024-04-29T21:24:18.703111290Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:24:18 pause-416800 dockerd[655]: time="2024-04-29T21:24:18.738204250Z" level=info msg="Loading containers: start."
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.054550940Z" level=info msg="Loading containers: done."
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.084144193Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.084331898Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.212016146Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.212412557Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:24:19 pause-416800 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.766966771Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:24:51 pause-416800 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.770148978Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.771317881Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.771374881Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.771426981Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:24:52 pause-416800 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:24:52 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:24:52 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:24:52 pause-416800 dockerd[1013]: time="2024-04-29T21:24:52.862162617Z" level=info msg="Starting up"
	Apr 29 21:24:52 pause-416800 dockerd[1013]: time="2024-04-29T21:24:52.864000922Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:24:52 pause-416800 dockerd[1013]: time="2024-04-29T21:24:52.869958936Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1020
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.903514914Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.937816193Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.937972294Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938035194Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938090094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938130194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938197794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938412195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938555795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938579995Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938592095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938621595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938819196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.942763205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.942907305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943227706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943338206Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943392106Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943495907Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943514407Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943748007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943907108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943953908Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943975308Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943991308Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.944220208Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.944714809Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.944920510Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945271211Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945390411Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945434311Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945540911Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945564911Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945599111Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945633112Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945656312Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945672712Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945687512Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945714212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945732112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945747012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945762812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945777512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946269113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946294213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946313913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946330713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946350413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946364813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946379413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946394913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946414913Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946440913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946529814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946550814Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946632314Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946835514Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946949815Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946972015Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947045415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947128815Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947152615Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947765017Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947900317Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.948024517Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.948089117Z" level=info msg="containerd successfully booted in 0.045741s"
	Apr 29 21:24:53 pause-416800 dockerd[1013]: time="2024-04-29T21:24:53.919350476Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:24:53 pause-416800 dockerd[1013]: time="2024-04-29T21:24:53.942953531Z" level=info msg="Loading containers: start."
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.156318927Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.253702753Z" level=info msg="Loading containers: done."
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.278231910Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.278430711Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:24:54 pause-416800 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.337975249Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.338619251Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.421392072Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.424167378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.424922480Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.425123880Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.425595281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:25:07 pause-416800 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:25:08 pause-416800 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:25:08 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:25:08 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:25:08 pause-416800 dockerd[1318]: time="2024-04-29T21:25:08.514335713Z" level=info msg="Starting up"
	Apr 29 21:25:08 pause-416800 dockerd[1318]: time="2024-04-29T21:25:08.515340415Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:25:08 pause-416800 dockerd[1318]: time="2024-04-29T21:25:08.519939626Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1324
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.560165920Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591448392Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591618393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591808593Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591928293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.592015494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.592135494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.593715198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.593928998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594166999Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594189299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594354299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594668300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.597981608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598187308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598468209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598584709Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598621109Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598644009Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598656209Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598818210Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598882010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598906110Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598924910Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598943910Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599028010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599572611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599770812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599881712Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599915212Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599939512Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599956412Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599972012Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599990912Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600035212Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600113013Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600133913Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600172813Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600205713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600223313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600353913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600375613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600390413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600406413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600421513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600436813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600460613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600483513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600497713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600517713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600533914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600565714Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600612814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600645514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600672714Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600831214Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600938314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600958114Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600970915Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601044115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601139915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601176415Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601569916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601732616Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.602040817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.602127717Z" level=info msg="containerd successfully booted in 0.044753s"
	Apr 29 21:25:09 pause-416800 dockerd[1318]: time="2024-04-29T21:25:09.568563964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.411662825Z" level=info msg="Loading containers: start."
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.615595599Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.721449045Z" level=info msg="Loading containers: done."
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.750303212Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.750790913Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.805303840Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:25:10 pause-416800 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.807829146Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.606425824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.606496821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.606570118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.607592879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.623147474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.623943043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.624244632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.626386948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.719675324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.719927914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.719949513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.720297999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760656531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760731228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760746928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760839824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.059597766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.060044350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.060251342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.060579630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366461432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366749622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366806620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366942615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414168301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414262998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414298596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414404993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441025627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441145322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441160622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441265218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.596447790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.598287877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.598455176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.598838073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844422752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844491151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844505151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844606550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.895839891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.896657886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.896883084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.903961635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.228771091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.229506586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.229740485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.231011777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093311009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093466710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093484810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093596411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.126263544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.129676068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.129724968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.134574903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:56 pause-416800 dockerd[1318]: time="2024-04-29T21:25:56.582781702Z" level=info msg="ignoring event" container=143a22070c0c0a7b387153dda7779cd56d9cabd3c0bffc447f7404c1b8d9913f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.586196426Z" level=info msg="shim disconnected" id=143a22070c0c0a7b387153dda7779cd56d9cabd3c0bffc447f7404c1b8d9913f namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.586362427Z" level=warning msg="cleaning up after shim disconnected" id=143a22070c0c0a7b387153dda7779cd56d9cabd3c0bffc447f7404c1b8d9913f namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.586384027Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.800643235Z" level=info msg="shim disconnected" id=0600823b5b43fef50833297d9ceed953dc608849a1f8f13fd2b4cba160ab9559 namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.801013438Z" level=warning msg="cleaning up after shim disconnected" id=0600823b5b43fef50833297d9ceed953dc608849a1f8f13fd2b4cba160ab9559 namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.801232940Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1318]: time="2024-04-29T21:25:56.801693743Z" level=info msg="ignoring event" container=0600823b5b43fef50833297d9ceed953dc608849a1f8f13fd2b4cba160ab9559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.260439877Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:30:32 pause-416800 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.568855275Z" level=info msg="ignoring event" container=5c8f5267a3cf475e316d3da584a95cc218c8b2a4230353e77ac850741880d1f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.569774580Z" level=info msg="shim disconnected" id=5c8f5267a3cf475e316d3da584a95cc218c8b2a4230353e77ac850741880d1f3 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.573119596Z" level=warning msg="cleaning up after shim disconnected" id=5c8f5267a3cf475e316d3da584a95cc218c8b2a4230353e77ac850741880d1f3 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.573571898Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.607146261Z" level=info msg="shim disconnected" id=fa0e7451cce167c319c12849528c38723c094cded94adf70f0471c25757ae2e9 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.607332862Z" level=warning msg="cleaning up after shim disconnected" id=fa0e7451cce167c319c12849528c38723c094cded94adf70f0471c25757ae2e9 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.607463163Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.608261866Z" level=info msg="ignoring event" container=fa0e7451cce167c319c12849528c38723c094cded94adf70f0471c25757ae2e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.623792242Z" level=info msg="ignoring event" container=f647857d7c137481de3268cb9c2654392b71c81e81058fcb7f35c1190433a6e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.624257044Z" level=info msg="shim disconnected" id=f647857d7c137481de3268cb9c2654392b71c81e81058fcb7f35c1190433a6e1 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.625775252Z" level=warning msg="cleaning up after shim disconnected" id=f647857d7c137481de3268cb9c2654392b71c81e81058fcb7f35c1190433a6e1 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.625906352Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.633692290Z" level=info msg="ignoring event" container=becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.634674195Z" level=info msg="shim disconnected" id=becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.634789095Z" level=warning msg="cleaning up after shim disconnected" id=becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.634854896Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.651124375Z" level=info msg="ignoring event" container=45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.655334295Z" level=info msg="shim disconnected" id=45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.656874203Z" level=warning msg="cleaning up after shim disconnected" id=45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.657076404Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.658227909Z" level=info msg="ignoring event" container=9baf3eb6e35e2a7ac754a8b432fcf966fb71e817d62f3bfe5d7e5f3ac09d6a9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.660480020Z" level=info msg="shim disconnected" id=9baf3eb6e35e2a7ac754a8b432fcf966fb71e817d62f3bfe5d7e5f3ac09d6a9f namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.660635321Z" level=warning msg="cleaning up after shim disconnected" id=9baf3eb6e35e2a7ac754a8b432fcf966fb71e817d62f3bfe5d7e5f3ac09d6a9f namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.660767522Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.682335926Z" level=info msg="shim disconnected" id=93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.682727028Z" level=warning msg="cleaning up after shim disconnected" id=93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.690600767Z" level=info msg="ignoring event" container=93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.690728567Z" level=info msg="ignoring event" container=fb1fdb294cb6a0cad43ec59e1aef0133cc72405900a4da100246efb834b9b250 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.690769567Z" level=info msg="ignoring event" container=6804050c7ca2aa5b7de1e60b9c79de1967a6fb06aad45f5ae3fba7964b22edda module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.690408666Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.694207284Z" level=info msg="shim disconnected" id=6804050c7ca2aa5b7de1e60b9c79de1967a6fb06aad45f5ae3fba7964b22edda namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.694956788Z" level=warning msg="cleaning up after shim disconnected" id=6804050c7ca2aa5b7de1e60b9c79de1967a6fb06aad45f5ae3fba7964b22edda namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.695260289Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.702901026Z" level=info msg="shim disconnected" id=fb1fdb294cb6a0cad43ec59e1aef0133cc72405900a4da100246efb834b9b250 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.703184628Z" level=warning msg="cleaning up after shim disconnected" id=fb1fdb294cb6a0cad43ec59e1aef0133cc72405900a4da100246efb834b9b250 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.703319028Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.728201349Z" level=info msg="ignoring event" container=7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.729420255Z" level=info msg="shim disconnected" id=7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.729627256Z" level=warning msg="cleaning up after shim disconnected" id=7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.729930858Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:37 pause-416800 dockerd[1318]: time="2024-04-29T21:30:37.420909147Z" level=info msg="ignoring event" container=c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:37 pause-416800 dockerd[1324]: time="2024-04-29T21:30:37.425478770Z" level=info msg="shim disconnected" id=c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01 namespace=moby
	Apr 29 21:30:37 pause-416800 dockerd[1324]: time="2024-04-29T21:30:37.425705171Z" level=warning msg="cleaning up after shim disconnected" id=c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01 namespace=moby
	Apr 29 21:30:37 pause-416800 dockerd[1324]: time="2024-04-29T21:30:37.425728471Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.438113999Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1
	Apr 29 21:30:42 pause-416800 dockerd[1324]: time="2024-04-29T21:30:42.489671939Z" level=info msg="shim disconnected" id=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1 namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.489674839Z" level=info msg="ignoring event" container=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:42 pause-416800 dockerd[1324]: time="2024-04-29T21:30:42.489759140Z" level=warning msg="cleaning up after shim disconnected" id=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1 namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1324]: time="2024-04-29T21:30:42.489777640Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.568187753Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.569158555Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.569301356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.569338456Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:30:43 pause-416800 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:30:43 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:30:43 pause-416800 systemd[1]: docker.service: Consumed 9.301s CPU time.
	Apr 29 21:30:43 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:30:43 pause-416800 dockerd[4608]: time="2024-04-29T21:30:43.658828498Z" level=info msg="Starting up"
	Apr 29 21:31:43 pause-416800 dockerd[4608]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 21:31:43 pause-416800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 21:31:43 pause-416800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 21:31:43 pause-416800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 21:24:17 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:24:17 pause-416800 dockerd[655]: time="2024-04-29T21:24:17.627211246Z" level=info msg="Starting up"
	Apr 29 21:24:17 pause-416800 dockerd[655]: time="2024-04-29T21:24:17.628407685Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:24:17 pause-416800 dockerd[655]: time="2024-04-29T21:24:17.629693226Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=661
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.678950314Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.710969847Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711084050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711323458Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711434562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711570566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711677570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712121184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712280589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712307690Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712320390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712425794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712929110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.715949207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716089412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716361621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716406122Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716527226Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716682231Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716726732Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746659397Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746878205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746910806Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746932106Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746952007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.747107712Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.747684831Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748291350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748352552Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748385753Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748409654Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748434855Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748456355Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748482656Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748543658Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748565359Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748603560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748624061Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748655462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748687663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748751365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748795266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748855368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748876469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748895970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748916070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748936471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748982472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749004473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749025474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749042574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749068375Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749099876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749121577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749142378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749224980Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749943103Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.750222512Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.750432719Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.750857533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.751022438Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.751579656Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.752779895Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.752934200Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.753010902Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.753037203Z" level=info msg="containerd successfully booted in 0.076654s"
	Apr 29 21:24:18 pause-416800 dockerd[655]: time="2024-04-29T21:24:18.703111290Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:24:18 pause-416800 dockerd[655]: time="2024-04-29T21:24:18.738204250Z" level=info msg="Loading containers: start."
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.054550940Z" level=info msg="Loading containers: done."
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.084144193Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.084331898Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.212016146Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.212412557Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:24:19 pause-416800 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.766966771Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:24:51 pause-416800 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.770148978Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.771317881Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.771374881Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.771426981Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:24:52 pause-416800 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:24:52 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:24:52 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:24:52 pause-416800 dockerd[1013]: time="2024-04-29T21:24:52.862162617Z" level=info msg="Starting up"
	Apr 29 21:24:52 pause-416800 dockerd[1013]: time="2024-04-29T21:24:52.864000922Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:24:52 pause-416800 dockerd[1013]: time="2024-04-29T21:24:52.869958936Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1020
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.903514914Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.937816193Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.937972294Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938035194Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938090094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938130194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938197794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938412195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938555795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938579995Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938592095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938621595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938819196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.942763205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.942907305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943227706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943338206Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943392106Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943495907Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943514407Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943748007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943907108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943953908Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943975308Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943991308Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.944220208Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.944714809Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.944920510Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945271211Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945390411Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945434311Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945540911Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945564911Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945599111Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945633112Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945656312Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945672712Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945687512Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945714212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945732112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945747012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945762812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945777512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946269113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946294213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946313913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946330713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946350413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946364813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946379413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946394913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946414913Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946440913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946529814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946550814Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946632314Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946835514Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946949815Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946972015Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947045415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947128815Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947152615Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947765017Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947900317Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.948024517Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.948089117Z" level=info msg="containerd successfully booted in 0.045741s"
	Apr 29 21:24:53 pause-416800 dockerd[1013]: time="2024-04-29T21:24:53.919350476Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:24:53 pause-416800 dockerd[1013]: time="2024-04-29T21:24:53.942953531Z" level=info msg="Loading containers: start."
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.156318927Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.253702753Z" level=info msg="Loading containers: done."
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.278231910Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.278430711Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:24:54 pause-416800 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.337975249Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.338619251Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.421392072Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.424167378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.424922480Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.425123880Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.425595281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:25:07 pause-416800 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:25:08 pause-416800 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:25:08 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:25:08 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:25:08 pause-416800 dockerd[1318]: time="2024-04-29T21:25:08.514335713Z" level=info msg="Starting up"
	Apr 29 21:25:08 pause-416800 dockerd[1318]: time="2024-04-29T21:25:08.515340415Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:25:08 pause-416800 dockerd[1318]: time="2024-04-29T21:25:08.519939626Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1324
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.560165920Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591448392Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591618393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591808593Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591928293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.592015494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.592135494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.593715198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.593928998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594166999Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594189299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594354299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594668300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.597981608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598187308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598468209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598584709Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598621109Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598644009Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598656209Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598818210Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598882010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598906110Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598924910Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598943910Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599028010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599572611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599770812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599881712Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599915212Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599939512Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599956412Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599972012Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599990912Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600035212Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600113013Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600133913Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600172813Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600205713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600223313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600353913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600375613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600390413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600406413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600421513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600436813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600460613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600483513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600497713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600517713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600533914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600565714Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600612814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600645514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600672714Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600831214Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600938314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600958114Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600970915Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601044115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601139915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601176415Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601569916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601732616Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.602040817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.602127717Z" level=info msg="containerd successfully booted in 0.044753s"
	Apr 29 21:25:09 pause-416800 dockerd[1318]: time="2024-04-29T21:25:09.568563964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.411662825Z" level=info msg="Loading containers: start."
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.615595599Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.721449045Z" level=info msg="Loading containers: done."
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.750303212Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.750790913Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.805303840Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:25:10 pause-416800 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.807829146Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.606425824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.606496821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.606570118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.607592879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.623147474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.623943043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.624244632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.626386948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.719675324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.719927914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.719949513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.720297999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760656531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760731228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760746928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760839824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.059597766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.060044350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.060251342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.060579630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366461432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366749622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366806620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366942615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414168301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414262998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414298596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414404993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441025627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441145322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441160622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441265218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.596447790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.598287877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.598455176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.598838073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844422752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844491151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844505151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844606550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.895839891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.896657886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.896883084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.903961635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.228771091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.229506586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.229740485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.231011777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093311009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093466710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093484810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093596411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.126263544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.129676068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.129724968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.134574903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:56 pause-416800 dockerd[1318]: time="2024-04-29T21:25:56.582781702Z" level=info msg="ignoring event" container=143a22070c0c0a7b387153dda7779cd56d9cabd3c0bffc447f7404c1b8d9913f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.586196426Z" level=info msg="shim disconnected" id=143a22070c0c0a7b387153dda7779cd56d9cabd3c0bffc447f7404c1b8d9913f namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.586362427Z" level=warning msg="cleaning up after shim disconnected" id=143a22070c0c0a7b387153dda7779cd56d9cabd3c0bffc447f7404c1b8d9913f namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.586384027Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.800643235Z" level=info msg="shim disconnected" id=0600823b5b43fef50833297d9ceed953dc608849a1f8f13fd2b4cba160ab9559 namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.801013438Z" level=warning msg="cleaning up after shim disconnected" id=0600823b5b43fef50833297d9ceed953dc608849a1f8f13fd2b4cba160ab9559 namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.801232940Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1318]: time="2024-04-29T21:25:56.801693743Z" level=info msg="ignoring event" container=0600823b5b43fef50833297d9ceed953dc608849a1f8f13fd2b4cba160ab9559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.260439877Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:30:32 pause-416800 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.568855275Z" level=info msg="ignoring event" container=5c8f5267a3cf475e316d3da584a95cc218c8b2a4230353e77ac850741880d1f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.569774580Z" level=info msg="shim disconnected" id=5c8f5267a3cf475e316d3da584a95cc218c8b2a4230353e77ac850741880d1f3 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.573119596Z" level=warning msg="cleaning up after shim disconnected" id=5c8f5267a3cf475e316d3da584a95cc218c8b2a4230353e77ac850741880d1f3 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.573571898Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.607146261Z" level=info msg="shim disconnected" id=fa0e7451cce167c319c12849528c38723c094cded94adf70f0471c25757ae2e9 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.607332862Z" level=warning msg="cleaning up after shim disconnected" id=fa0e7451cce167c319c12849528c38723c094cded94adf70f0471c25757ae2e9 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.607463163Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.608261866Z" level=info msg="ignoring event" container=fa0e7451cce167c319c12849528c38723c094cded94adf70f0471c25757ae2e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.623792242Z" level=info msg="ignoring event" container=f647857d7c137481de3268cb9c2654392b71c81e81058fcb7f35c1190433a6e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.624257044Z" level=info msg="shim disconnected" id=f647857d7c137481de3268cb9c2654392b71c81e81058fcb7f35c1190433a6e1 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.625775252Z" level=warning msg="cleaning up after shim disconnected" id=f647857d7c137481de3268cb9c2654392b71c81e81058fcb7f35c1190433a6e1 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.625906352Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.633692290Z" level=info msg="ignoring event" container=becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.634674195Z" level=info msg="shim disconnected" id=becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.634789095Z" level=warning msg="cleaning up after shim disconnected" id=becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.634854896Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.651124375Z" level=info msg="ignoring event" container=45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.655334295Z" level=info msg="shim disconnected" id=45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.656874203Z" level=warning msg="cleaning up after shim disconnected" id=45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.657076404Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.658227909Z" level=info msg="ignoring event" container=9baf3eb6e35e2a7ac754a8b432fcf966fb71e817d62f3bfe5d7e5f3ac09d6a9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.660480020Z" level=info msg="shim disconnected" id=9baf3eb6e35e2a7ac754a8b432fcf966fb71e817d62f3bfe5d7e5f3ac09d6a9f namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.660635321Z" level=warning msg="cleaning up after shim disconnected" id=9baf3eb6e35e2a7ac754a8b432fcf966fb71e817d62f3bfe5d7e5f3ac09d6a9f namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.660767522Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.682335926Z" level=info msg="shim disconnected" id=93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.682727028Z" level=warning msg="cleaning up after shim disconnected" id=93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.690600767Z" level=info msg="ignoring event" container=93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.690728567Z" level=info msg="ignoring event" container=fb1fdb294cb6a0cad43ec59e1aef0133cc72405900a4da100246efb834b9b250 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.690769567Z" level=info msg="ignoring event" container=6804050c7ca2aa5b7de1e60b9c79de1967a6fb06aad45f5ae3fba7964b22edda module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.690408666Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.694207284Z" level=info msg="shim disconnected" id=6804050c7ca2aa5b7de1e60b9c79de1967a6fb06aad45f5ae3fba7964b22edda namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.694956788Z" level=warning msg="cleaning up after shim disconnected" id=6804050c7ca2aa5b7de1e60b9c79de1967a6fb06aad45f5ae3fba7964b22edda namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.695260289Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.702901026Z" level=info msg="shim disconnected" id=fb1fdb294cb6a0cad43ec59e1aef0133cc72405900a4da100246efb834b9b250 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.703184628Z" level=warning msg="cleaning up after shim disconnected" id=fb1fdb294cb6a0cad43ec59e1aef0133cc72405900a4da100246efb834b9b250 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.703319028Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.728201349Z" level=info msg="ignoring event" container=7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.729420255Z" level=info msg="shim disconnected" id=7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.729627256Z" level=warning msg="cleaning up after shim disconnected" id=7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.729930858Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:37 pause-416800 dockerd[1318]: time="2024-04-29T21:30:37.420909147Z" level=info msg="ignoring event" container=c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:37 pause-416800 dockerd[1324]: time="2024-04-29T21:30:37.425478770Z" level=info msg="shim disconnected" id=c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01 namespace=moby
	Apr 29 21:30:37 pause-416800 dockerd[1324]: time="2024-04-29T21:30:37.425705171Z" level=warning msg="cleaning up after shim disconnected" id=c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01 namespace=moby
	Apr 29 21:30:37 pause-416800 dockerd[1324]: time="2024-04-29T21:30:37.425728471Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.438113999Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1
	Apr 29 21:30:42 pause-416800 dockerd[1324]: time="2024-04-29T21:30:42.489671939Z" level=info msg="shim disconnected" id=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1 namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.489674839Z" level=info msg="ignoring event" container=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:42 pause-416800 dockerd[1324]: time="2024-04-29T21:30:42.489759140Z" level=warning msg="cleaning up after shim disconnected" id=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1 namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1324]: time="2024-04-29T21:30:42.489777640Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.568187753Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.569158555Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.569301356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.569338456Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:30:43 pause-416800 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:30:43 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:30:43 pause-416800 systemd[1]: docker.service: Consumed 9.301s CPU time.
	Apr 29 21:30:43 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:30:43 pause-416800 dockerd[4608]: time="2024-04-29T21:30:43.658828498Z" level=info msg="Starting up"
	Apr 29 21:31:43 pause-416800 dockerd[4608]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 21:31:43 pause-416800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 21:31:43 pause-416800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 21:31:43 pause-416800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 21:31:43.779435    2584 out.go:239] * 
	* 
	W0429 21:31:43.780659    2584 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 21:31:43.785834    2584 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-windows-amd64.exe start -p pause-416800 --alsologtostderr -v=1 --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-416800 -n pause-416800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-416800 -n pause-416800: exit status 2 (13.036635s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 21:31:44.395219    7612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-416800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-416800 logs -n 25: (2m47.579622s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-424300       | scheduled-stop-424300     | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:08 UTC | 29 Apr 24 21:08 UTC |
	|         | --schedule 5s                  |                           |                   |         |                     |                     |
	| delete  | -p scheduled-stop-424300       | scheduled-stop-424300     | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:10 UTC | 29 Apr 24 21:10 UTC |
	| start   | -p kubernetes-upgrade-262400   | kubernetes-upgrade-262400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:10 UTC |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p force-systemd-flag-262400   | force-systemd-flag-262400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:10 UTC | 29 Apr 24 21:13 UTC |
	|         | --memory=2048 --force-systemd  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p offline-docker-186800       | offline-docker-186800     | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:10 UTC | 29 Apr 24 21:17 UTC |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --memory=2048 --wait=true      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-262400         | NoKubernetes-262400       | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:10 UTC |                     |
	|         | --no-kubernetes                |                           |                   |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-262400         | NoKubernetes-262400       | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:10 UTC |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-262400      | force-systemd-flag-262400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:13 UTC | 29 Apr 24 21:14 UTC |
	|         | ssh docker info --format       |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-262400   | force-systemd-flag-262400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:14 UTC | 29 Apr 24 21:14 UTC |
	| start   | -p stopped-upgrade-467400      | minikube                  | minikube6\jenkins | v1.26.0 | 29 Apr 24 21:14 GMT | 29 Apr 24 21:21 GMT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv             |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-262400         | NoKubernetes-262400       | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:15 UTC | 29 Apr 24 21:15 UTC |
	| start   | -p running-upgrade-013100      | minikube                  | minikube6\jenkins | v1.26.0 | 29 Apr 24 21:15 GMT | 29 Apr 24 21:23 GMT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv             |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-186800       | offline-docker-186800     | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:17 UTC | 29 Apr 24 21:17 UTC |
	| start   | -p pause-416800 --memory=2048  | pause-416800              | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:17 UTC | 29 Apr 24 21:26 UTC |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv     |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-262400   | kubernetes-upgrade-262400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:18 UTC | 29 Apr 24 21:20 UTC |
	| start   | -p kubernetes-upgrade-262400   | kubernetes-upgrade-262400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:20 UTC |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-467400 stop    | minikube                  | minikube6\jenkins | v1.26.0 | 29 Apr 24 21:21 GMT | 29 Apr 24 21:21 GMT |
	| start   | -p stopped-upgrade-467400      | stopped-upgrade-467400    | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:21 UTC | 29 Apr 24 21:29 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-013100      | running-upgrade-013100    | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:23 UTC | 29 Apr 24 21:31 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-416800                | pause-416800              | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p kubernetes-upgrade-262400   | kubernetes-upgrade-262400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:27 UTC | 29 Apr 24 21:28 UTC |
	| start   | -p cert-expiration-004200      | cert-expiration-004200    | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:28 UTC |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-467400      | stopped-upgrade-467400    | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:29 UTC | 29 Apr 24 21:30 UTC |
	| start   | -p docker-flags-286800         | docker-flags-286800       | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:30 UTC |                     |
	|         | --cache-images=false           |                           |                   |         |                     |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=false                   |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR           |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT           |                           |                   |         |                     |                     |
	|         | --docker-opt=debug             |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-013100      | running-upgrade-013100    | minikube6\jenkins | v1.33.0 | 29 Apr 24 21:31 UTC |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 21:30:27
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 21:30:27.629030   13940 out.go:291] Setting OutFile to fd 1164 ...
	I0429 21:30:27.630028   13940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 21:30:27.630028   13940 out.go:304] Setting ErrFile to fd 1036...
	I0429 21:30:27.630028   13940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 21:30:27.655032   13940 out.go:298] Setting JSON to false
	I0429 21:30:27.659024   13940 start.go:129] hostinfo: {"hostname":"minikube6","uptime":28167,"bootTime":1714398060,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 21:30:27.660096   13940 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 21:30:27.668046   13940 out.go:177] * [docker-flags-286800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 21:30:27.679053   13940 notify.go:220] Checking for updates...
	I0429 21:30:27.681078   13940 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 21:30:27.684056   13940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 21:30:27.686051   13940 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 21:30:27.689037   13940 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 21:30:27.692065   13940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 21:30:25.134413    7500 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (4.2225649s)
	I0429 21:30:25.134529    7500 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0 from cache
	I0429 21:30:25.134745    7500 cache_images.go:92] duration metric: took 7.6784504s to LoadCachedImages
	W0429 21:30:25.135060    7500 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.1: The system cannot find the file specified.
	I0429 21:30:25.135060    7500 kubeadm.go:928] updating node { 172.17.255.204 8443 v1.24.1 docker true true} ...
	I0429 21:30:25.135060    7500 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-013100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.255.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-013100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 21:30:25.153812    7500 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 21:30:25.215092    7500 cni.go:84] Creating CNI manager for ""
	I0429 21:30:25.215092    7500 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 21:30:25.215092    7500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 21:30:25.215092    7500 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.255.204 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-013100 NodeName:running-upgrade-013100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.255.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.255.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 21:30:25.215092    7500 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.255.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-013100"
	  kubeletExtraArgs:
	    node-ip: 172.17.255.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.255.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 21:30:25.233095    7500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0429 21:30:25.251950    7500 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 21:30:25.266580    7500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 21:30:25.284687    7500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0429 21:30:25.314445    7500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 21:30:25.345116    7500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
	I0429 21:30:25.390801    7500 ssh_runner.go:195] Run: grep 172.17.255.204	control-plane.minikube.internal$ /etc/hosts
	I0429 21:30:25.412499    7500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:30:25.633540    7500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 21:30:25.662305    7500 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100 for IP: 172.17.255.204
	I0429 21:30:25.662305    7500 certs.go:194] generating shared ca certs ...
	I0429 21:30:25.662305    7500 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 21:30:25.662674    7500 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 21:30:25.662674    7500 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 21:30:25.663755    7500 certs.go:256] generating profile certs ...
	I0429 21:30:25.663755    7500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\client.key
	I0429 21:30:25.663755    7500 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.key.dbdeca72
	I0429 21:30:25.664767    7500 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.crt.dbdeca72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.255.204]
	I0429 21:30:25.785581    7500 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.crt.dbdeca72 ...
	I0429 21:30:25.785581    7500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.crt.dbdeca72: {Name:mk37ed65b8825a8b4975fbc904b97d342e8d14e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 21:30:25.786776    7500 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.key.dbdeca72 ...
	I0429 21:30:25.787726    7500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.key.dbdeca72: {Name:mkc42e858c8c62a63a286e061d78b2629e8a7fff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 21:30:25.788119    7500 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.crt.dbdeca72 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.crt
	I0429 21:30:25.803302    7500 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.key.dbdeca72 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.key
	I0429 21:30:25.805165    7500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\proxy-client.key
	I0429 21:30:25.808359    7500 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem (1338 bytes)
	W0429 21:30:25.808944    7500 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756_empty.pem, impossibly tiny 0 bytes
	I0429 21:30:25.809127    7500 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0429 21:30:25.809531    7500 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 21:30:25.810065    7500 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 21:30:25.810745    7500 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 21:30:25.811842    7500 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem (1708 bytes)
	I0429 21:30:25.814858    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 21:30:25.895161    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 21:30:26.042881    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 21:30:26.148049    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 21:30:26.250542    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 21:30:26.338102    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 21:30:26.385855    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 21:30:26.438356    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-013100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 21:30:26.484646    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\137562.pem --> /usr/share/ca-certificates/137562.pem (1708 bytes)
	I0429 21:30:26.526859    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 21:30:26.570849    7500 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13756.pem --> /usr/share/ca-certificates/13756.pem (1338 bytes)
	I0429 21:30:26.641826    7500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 21:30:26.690852    7500 ssh_runner.go:195] Run: openssl version
	I0429 21:30:26.718181    7500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137562.pem && ln -fs /usr/share/ca-certificates/137562.pem /etc/ssl/certs/137562.pem"
	I0429 21:30:26.757541    7500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137562.pem
	I0429 21:30:26.766551    7500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:59 /usr/share/ca-certificates/137562.pem
	I0429 21:30:26.782552    7500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137562.pem
	I0429 21:30:26.813564    7500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137562.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 21:30:26.908585    7500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 21:30:26.959576    7500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 21:30:26.968573    7500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0429 21:30:26.982641    7500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 21:30:27.009581    7500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 21:30:27.047789    7500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13756.pem && ln -fs /usr/share/ca-certificates/13756.pem /etc/ssl/certs/13756.pem"
	I0429 21:30:27.082565    7500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13756.pem
	I0429 21:30:27.088620    7500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:59 /usr/share/ca-certificates/13756.pem
	I0429 21:30:27.103566    7500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13756.pem
	I0429 21:30:27.138609    7500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13756.pem /etc/ssl/certs/51391683.0"
	I0429 21:30:27.182566    7500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 21:30:27.211916    7500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 21:30:27.252171    7500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 21:30:27.281083    7500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 21:30:27.306554    7500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 21:30:27.337128    7500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 21:30:27.364707    7500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 21:30:27.385050    7500 kubeadm.go:391] StartCluster: {Name:running-upgrade-013100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-013100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.255.204 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0429 21:30:27.396029    7500 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 21:30:27.542040    7500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 21:30:27.563039    7500 host.go:66] Checking if "running-upgrade-013100" exists ...
	I0429 21:30:27.563039    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-013100 ).state
	I0429 21:30:26.985577    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:30:26.985577    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:26.985577    2584 sshutil.go:53] new ssh client: &{IP:172.17.243.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-416800\id_rsa Username:docker}
	I0429 21:30:27.022570    2584 main.go:141] libmachine: [stdout =====>] : 172.17.243.17
	
	I0429 21:30:27.022570    2584 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:27.023557    2584 sshutil.go:53] new ssh client: &{IP:172.17.243.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-416800\id_rsa Username:docker}
	I0429 21:30:27.082565    2584 ssh_runner.go:235] Completed: cat /version.json: (5.2117371s)
	I0429 21:30:27.099566    2584 ssh_runner.go:195] Run: systemctl --version
	I0429 21:30:29.114386    2584 ssh_runner.go:235] Completed: systemctl --version: (2.0148034s)
	I0429 21:30:29.115006    2584 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.2565873s)
	W0429 21:30:29.115285    2584 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0429 21:30:29.115671    2584 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0429 21:30:29.115932    2584 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0429 21:30:29.136937    2584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 21:30:29.151554    2584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 21:30:29.168177    2584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 21:30:29.195412    2584 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 21:30:29.195412    2584 start.go:494] detecting cgroup driver to use...
	I0429 21:30:29.195412    2584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 21:30:29.267125    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 21:30:29.319461    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 21:30:29.357075    2584 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 21:30:29.381075    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 21:30:29.442285    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 21:30:29.486896    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 21:30:29.537918    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 21:30:29.594891    2584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 21:30:29.653782    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 21:30:29.694596    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 21:30:29.735709    2584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 21:30:29.785727    2584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 21:30:29.825661    2584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 21:30:29.873687    2584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:30:30.358445    2584 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 21:30:30.413057    2584 start.go:494] detecting cgroup driver to use...
	I0429 21:30:30.437306    2584 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 21:30:30.501885    2584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 21:30:30.546878    2584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 21:30:30.619534    2584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 21:30:30.691014    2584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 21:30:30.723029    2584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 21:30:30.820235    2584 ssh_runner.go:195] Run: which cri-dockerd
	I0429 21:30:30.862335    2584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 21:30:30.898592    2584 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 21:30:30.972340    2584 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 21:30:31.396915    2584 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 21:30:27.256183   14108 main.go:141] libmachine: Creating SSH key...
	I0429 21:30:27.857643   14108 main.go:141] libmachine: Creating VM...
	I0429 21:30:27.857643   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 21:30:31.819593   14108 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 21:30:31.819593   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:31.819593   14108 main.go:141] libmachine: Using switch "Default Switch"
	I0429 21:30:31.819593   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 21:30:27.696040   13940 config.go:182] Loaded profile config "cert-expiration-004200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 21:30:27.697039   13940 config.go:182] Loaded profile config "ha-513500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 21:30:27.697039   13940 config.go:182] Loaded profile config "pause-416800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 21:30:27.698046   13940 config.go:182] Loaded profile config "running-upgrade-013100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 21:30:27.698046   13940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 21:30:30.303865    7500 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:30:30.304888    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:30.304888    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-013100 ).state
	I0429 21:30:32.925271    7500 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:30:32.925271    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:32.925271    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-013100 ).networkadapters[0]).ipaddresses[0]
	I0429 21:30:34.719565   13940 out.go:177] * Using the hyperv driver based on user configuration
	I0429 21:30:34.722912   13940 start.go:297] selected driver: hyperv
	I0429 21:30:34.722912   13940 start.go:901] validating driver "hyperv" against <nil>
	I0429 21:30:34.722912   13940 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 21:30:34.789977   13940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 21:30:34.790957   13940 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0429 21:30:34.790957   13940 cni.go:84] Creating CNI manager for ""
	I0429 21:30:34.790957   13940 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 21:30:34.791978   13940 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 21:30:34.792202   13940 start.go:340] cluster config:
	{Name:docker-flags-286800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-286800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 21:30:34.792202   13940 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 21:30:34.795727   13940 out.go:177] * Starting "docker-flags-286800" primary control-plane node in "docker-flags-286800" cluster
	I0429 21:30:31.755467    2584 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 21:30:31.755884    2584 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 21:30:31.812954    2584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:30:32.218463    2584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 21:30:34.000164   14108 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 21:30:34.000164   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:34.000430   14108 main.go:141] libmachine: Creating VHD
	I0429 21:30:34.000430   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-004200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 21:30:34.797865   13940 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 21:30:34.797865   13940 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 21:30:34.798862   13940 cache.go:56] Caching tarball of preloaded images
	I0429 21:30:34.798862   13940 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 21:30:34.798862   13940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 21:30:34.798862   13940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\docker-flags-286800\config.json ...
	I0429 21:30:34.800048   13940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\docker-flags-286800\config.json: {Name:mk6a89137622b96ea4d4c1505a33d2ae6e7616b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 21:30:34.801147   13940 start.go:360] acquireMachinesLock for docker-flags-286800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 21:30:35.825414    7500 main.go:141] libmachine: [stdout =====>] : 172.17.255.204
	
	I0429 21:30:35.825414    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:35.833296    7500 main.go:141] libmachine: Using SSH client type: external
	I0429 21:30:35.833839    7500 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@172.17.255.204 -o IdentitiesOnly=yes -i C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\running-upgrade-013100\id_rsa -p 22] C:\WINDOWS\System32\OpenSSH\ssh.exe <nil>}
	I0429 21:30:35.834056    7500 main.go:141] libmachine: C:\WINDOWS\System32\OpenSSH\ssh.exe -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@172.17.255.204 -o IdentitiesOnly=yes -i C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\running-upgrade-013100\id_rsa -p 22 -f -NTL 0:localhost:8443
	W0429 21:30:35.861464    7500 kubeadm.go:404] apiserver tunnel failed: ssh command: exit status 255
	I0429 21:30:35.861537    7500 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 21:30:35.861537    7500 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 21:30:35.877005    7500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 21:30:35.894191    7500 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 21:30:35.895564    7500 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-013100" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 21:30:35.896120    7500 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-013100" cluster setting kubeconfig missing "running-upgrade-013100" context setting]
	I0429 21:30:35.896760    7500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 21:30:35.912646    7500 kapi.go:59] client config for running-upgrade-013100: &rest.Config{Host:"https://172.17.255.204:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\running-upgrade-013100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\running-upgrade-013100/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 21:30:35.928641    7500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 21:30:35.946193    7500 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-013100"
	   kubeletExtraArgs:
	     node-ip: 172.17.255.204
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0429 21:30:35.946193    7500 kubeadm.go:1154] stopping kube-system containers ...
	I0429 21:30:35.959588    7500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 21:30:36.024003    7500 docker.go:483] Stopping containers: [017a9ae9f6db e56259b0987b bddb1d7e1e37 ccbab6111d5c 65d5578de0d6 8fda3c555598 267f4417431c 8020a82cafad fa97f9a4b5ad c82abe3722ba e52830360a07 39daec63466c c4dd2641d3b7 cac48711da06 5d7e450f90cd 29f77dd3f2cb d9a35f14729d 355d41936aca c285d2c0f308 f3cf65d0006e 1c1562339e0f 894c2eebb18d cb5929cf0a81 4716316dba36 54ae4b7e5703 8612e80805e7 28cc6f1e1553 42ae2897d0d4 11c509732f42 a04546f5b05e]
	I0429 21:30:36.034545    7500 ssh_runner.go:195] Run: docker stop 017a9ae9f6db e56259b0987b bddb1d7e1e37 ccbab6111d5c 65d5578de0d6 8fda3c555598 267f4417431c 8020a82cafad fa97f9a4b5ad c82abe3722ba e52830360a07 39daec63466c c4dd2641d3b7 cac48711da06 5d7e450f90cd 29f77dd3f2cb d9a35f14729d 355d41936aca c285d2c0f308 f3cf65d0006e 1c1562339e0f 894c2eebb18d cb5929cf0a81 4716316dba36 54ae4b7e5703 8612e80805e7 28cc6f1e1553 42ae2897d0d4 11c509732f42 a04546f5b05e
	I0429 21:30:37.916750   14108 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-004200\fix
	                          ed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1636E58E-964D-41DA-A8E1-D1A978777964
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 21:30:37.916750   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:37.916750   14108 main.go:141] libmachine: Writing magic tar header
	I0429 21:30:37.916750   14108 main.go:141] libmachine: Writing SSH key tar header
	I0429 21:30:37.925752   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-004200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-004200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 21:30:41.225908   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:30:41.225908   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:41.225908   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-004200\disk.vhd' -SizeBytes 20000MB
	I0429 21:30:41.621264    7500 ssh_runner.go:235] Completed: docker stop 017a9ae9f6db e56259b0987b bddb1d7e1e37 ccbab6111d5c 65d5578de0d6 8fda3c555598 267f4417431c 8020a82cafad fa97f9a4b5ad c82abe3722ba e52830360a07 39daec63466c c4dd2641d3b7 cac48711da06 5d7e450f90cd 29f77dd3f2cb d9a35f14729d 355d41936aca c285d2c0f308 f3cf65d0006e 1c1562339e0f 894c2eebb18d cb5929cf0a81 4716316dba36 54ae4b7e5703 8612e80805e7 28cc6f1e1553 42ae2897d0d4 11c509732f42 a04546f5b05e: (5.5866746s)
	I0429 21:30:41.635254    7500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 21:30:41.762269    7500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 21:30:41.798015    7500 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Apr 29 21:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Apr 29 21:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Apr 29 21:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Apr 29 21:22 /etc/kubernetes/scheduler.conf
	
	I0429 21:30:41.811651    7500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I0429 21:30:41.841890    7500 kubeadm.go:162] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 21:30:41.855804    7500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 21:30:41.911143    7500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I0429 21:30:41.937714    7500 kubeadm.go:162] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 21:30:41.951588    7500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 21:30:41.990316    7500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I0429 21:30:42.020097    7500 kubeadm.go:162] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 21:30:42.034695    7500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 21:30:42.070050    7500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I0429 21:30:42.086013    7500 kubeadm.go:162] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 21:30:42.101222    7500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 21:30:42.147323    7500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 21:30:42.176460    7500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 21:30:42.565386    7500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 21:30:43.802772   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:30:43.803101   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:43.803101   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM cert-expiration-004200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-004200' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0429 21:30:44.264876    7500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6994765s)
	I0429 21:30:44.264876    7500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 21:30:45.248808    7500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 21:30:45.341440    7500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 21:30:45.434545    7500 api_server.go:52] waiting for apiserver process to appear ...
	I0429 21:30:45.452404    7500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 21:30:45.473455    7500 api_server.go:72] duration metric: took 39.9076ms to wait for apiserver process to appear ...
	I0429 21:30:45.473455    7500 api_server.go:88] waiting for apiserver healthz status ...
	I0429 21:30:45.473455    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:30:47.626950   14108 main.go:141] libmachine: [stdout =====>] : 
	Name                   State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                   ----- ----------- ----------------- ------   ------             -------
	cert-expiration-004200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 21:30:47.626950   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:47.627292   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName cert-expiration-004200 -DynamicMemoryEnabled $false
	I0429 21:30:49.927549   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:30:49.927549   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:49.928380   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor cert-expiration-004200 -Count 2
	I0429 21:30:50.487544    7500 api_server.go:269] stopped: https://172.17.255.204:8443/healthz: Get "https://172.17.255.204:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 21:30:50.487899    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:30:52.903902    7500 api_server.go:269] stopped: https://172.17.255.204:8443/healthz: Get "https://172.17.255.204:8443/healthz": read tcp 172.17.240.1:55514->172.17.255.204:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I0429 21:30:52.904096    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:30:52.912944    7500 api_server.go:269] stopped: https://172.17.255.204:8443/healthz: Get "https://172.17.255.204:8443/healthz": read tcp 172.17.240.1:55515->172.17.255.204:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I0429 21:30:52.979544    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:30:52.206811   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:30:52.206811   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:52.206811   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName cert-expiration-004200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-004200\boot2docker.iso'
	I0429 21:30:54.845804   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:30:54.845804   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:54.845804   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName cert-expiration-004200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-004200\disk.vhd'
	I0429 21:30:57.988870    7500 api_server.go:269] stopped: https://172.17.255.204:8443/healthz: Get "https://172.17.255.204:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0429 21:30:57.988870    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:30:59.908098    7500 api_server.go:279] https://172.17.255.204:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 21:30:59.908544    7500 api_server.go:103] status: https://172.17.255.204:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 21:30:59.908588    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:30:59.969948    7500 api_server.go:279] https://172.17.255.204:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 21:30:59.969948    7500 api_server.go:103] status: https://172.17.255.204:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 21:30:59.986876    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:31:00.005861    7500 api_server.go:279] https://172.17.255.204:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 21:31:00.005921    7500 api_server.go:103] status: https://172.17.255.204:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 21:31:00.475345    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:31:00.486116    7500 api_server.go:279] https://172.17.255.204:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 21:31:00.486116    7500 api_server.go:103] status: https://172.17.255.204:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 21:31:00.977913    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:31:00.996218    7500 api_server.go:279] https://172.17.255.204:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 21:31:00.996566    7500 api_server.go:103] status: https://172.17.255.204:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 21:31:01.481031    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:31:01.492626    7500 api_server.go:279] https://172.17.255.204:8443/healthz returned 200:
	ok
	I0429 21:31:01.507811    7500 api_server.go:141] control plane version: v1.24.1
	I0429 21:31:01.507900    7500 api_server.go:131] duration metric: took 16.0343174s to wait for apiserver health ...
	I0429 21:31:01.507990    7500 cni.go:84] Creating CNI manager for ""
	I0429 21:31:01.507990    7500 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 21:31:01.512147    7500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 21:30:57.615094   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:30:57.615094   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:30:57.615094   14108 main.go:141] libmachine: Starting VM...
	I0429 21:30:57.616080   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM cert-expiration-004200
	I0429 21:31:00.831662   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:31:00.831662   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:00.831662   14108 main.go:141] libmachine: Waiting for host to start...
	I0429 21:31:00.831851   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:01.530333    7500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 21:31:01.559927    7500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 21:31:01.618450    7500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 21:31:01.653309    7500 system_pods.go:59] 7 kube-system pods found
	I0429 21:31:01.653309    7500 system_pods.go:61] "coredns-6d4b75cb6d-mk5hh" [659c5ee3-739c-4e46-88c8-245419c6c126] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 21:31:01.653309    7500 system_pods.go:61] "etcd-running-upgrade-013100" [84b2b2c5-b2a5-4a37-97fb-6f232ef40acf] Running
	I0429 21:31:01.653309    7500 system_pods.go:61] "kube-apiserver-running-upgrade-013100" [03f361c8-c7ef-4467-aa6c-3a6f131a1117] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 21:31:01.653309    7500 system_pods.go:61] "kube-controller-manager-running-upgrade-013100" [111b6669-bf15-4fa5-a3ef-83f73ada058d] Running
	I0429 21:31:01.653309    7500 system_pods.go:61] "kube-proxy-ps2w6" [1e6276d0-5a9f-42ef-987c-2af416bdd34d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 21:31:01.653309    7500 system_pods.go:61] "kube-scheduler-running-upgrade-013100" [9d1909c1-4732-4595-b376-899f1aca3892] Running
	I0429 21:31:01.653309    7500 system_pods.go:61] "storage-provisioner" [e80c467e-6435-420c-be28-093d2e312a10] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 21:31:01.653309    7500 system_pods.go:74] duration metric: took 34.788ms to wait for pod list to return data ...
	I0429 21:31:01.653309    7500 node_conditions.go:102] verifying NodePressure condition ...
	I0429 21:31:01.658301    7500 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0429 21:31:01.658301    7500 node_conditions.go:123] node cpu capacity is 2
	I0429 21:31:01.658301    7500 node_conditions.go:105] duration metric: took 4.9918ms to run NodePressure ...
	I0429 21:31:01.658301    7500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 21:31:02.329844    7500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 21:31:02.378894    7500 ops.go:34] apiserver oom_adj: -16
	I0429 21:31:02.378894    7500 kubeadm.go:591] duration metric: took 26.5170681s to restartPrimaryControlPlane
	I0429 21:31:02.378894    7500 kubeadm.go:393] duration metric: took 34.993567s to StartCluster
	I0429 21:31:02.378894    7500 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 21:31:02.379166    7500 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 21:31:02.381393    7500 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 21:31:02.381393    7500 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.255.204 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 21:31:02.394917    7500 out.go:177] * Verifying Kubernetes components...
	I0429 21:31:02.381393    7500 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 21:31:02.381393    7500 config.go:182] Loaded profile config "running-upgrade-013100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0429 21:31:02.395864    7500 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-013100"
	I0429 21:31:02.399277    7500 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-013100"
	W0429 21:31:02.399277    7500 addons.go:243] addon storage-provisioner should already be in state true
	I0429 21:31:02.395864    7500 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-013100"
	I0429 21:31:02.399277    7500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-013100"
	I0429 21:31:02.399277    7500 host.go:66] Checking if "running-upgrade-013100" exists ...
	I0429 21:31:02.400847    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-013100 ).state
	I0429 21:31:02.400847    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-013100 ).state
	I0429 21:31:02.417965    7500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 21:31:02.878591    7500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 21:31:02.923924    7500 api_server.go:52] waiting for apiserver process to appear ...
	I0429 21:31:02.945841    7500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 21:31:03.026287    7500 api_server.go:72] duration metric: took 644.7404ms to wait for apiserver process to appear ...
	I0429 21:31:03.026329    7500 api_server.go:88] waiting for apiserver healthz status ...
	I0429 21:31:03.026420    7500 api_server.go:253] Checking apiserver healthz at https://172.17.255.204:8443/healthz ...
	I0429 21:31:03.061141    7500 api_server.go:279] https://172.17.255.204:8443/healthz returned 200:
	ok
	I0429 21:31:03.064454    7500 api_server.go:141] control plane version: v1.24.1
	I0429 21:31:03.064515    7500 api_server.go:131] duration metric: took 38.1859ms to wait for apiserver health ...
	I0429 21:31:03.064515    7500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 21:31:03.109431    7500 system_pods.go:59] 7 kube-system pods found
	I0429 21:31:03.109499    7500 system_pods.go:61] "coredns-6d4b75cb6d-mk5hh" [659c5ee3-739c-4e46-88c8-245419c6c126] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 21:31:03.109569    7500 system_pods.go:61] "etcd-running-upgrade-013100" [84b2b2c5-b2a5-4a37-97fb-6f232ef40acf] Running
	I0429 21:31:03.109630    7500 system_pods.go:61] "kube-apiserver-running-upgrade-013100" [03f361c8-c7ef-4467-aa6c-3a6f131a1117] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 21:31:03.109677    7500 system_pods.go:61] "kube-controller-manager-running-upgrade-013100" [111b6669-bf15-4fa5-a3ef-83f73ada058d] Running
	I0429 21:31:03.109723    7500 system_pods.go:61] "kube-proxy-ps2w6" [1e6276d0-5a9f-42ef-987c-2af416bdd34d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 21:31:03.109723    7500 system_pods.go:61] "kube-scheduler-running-upgrade-013100" [9d1909c1-4732-4595-b376-899f1aca3892] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 21:31:03.109723    7500 system_pods.go:61] "storage-provisioner" [e80c467e-6435-420c-be28-093d2e312a10] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 21:31:03.109791    7500 system_pods.go:74] duration metric: took 45.2752ms to wait for pod list to return data ...
	I0429 21:31:03.109858    7500 kubeadm.go:576] duration metric: took 728.4595ms to wait for: map[apiserver:true system_pods:true]
	I0429 21:31:03.109939    7500 node_conditions.go:102] verifying NodePressure condition ...
	I0429 21:31:03.122345    7500 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0429 21:31:03.122345    7500 node_conditions.go:123] node cpu capacity is 2
	I0429 21:31:03.122345    7500 node_conditions.go:105] duration metric: took 12.4065ms to run NodePressure ...
	I0429 21:31:03.122345    7500 start.go:240] waiting for startup goroutines ...
	I0429 21:31:04.940042    7500 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:04.940042    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:04.943222    7500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 21:31:04.941611    7500 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:03.338690   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:03.338690   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:03.338690   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:06.125032   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:31:06.125129   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:04.943222    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:04.946985    7500 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 21:31:04.947058    7500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 21:31:04.947157    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-013100 ).state
	I0429 21:31:04.949384    7500 kapi.go:59] client config for running-upgrade-013100: &rest.Config{Host:"https://172.17.255.204:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\running-upgrade-013100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\running-upgrade-013100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2375ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 21:31:04.950993    7500 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-013100"
	W0429 21:31:04.950993    7500 addons.go:243] addon default-storageclass should already be in state true
	I0429 21:31:04.950993    7500 host.go:66] Checking if "running-upgrade-013100" exists ...
	I0429 21:31:04.952209    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-013100 ).state
	I0429 21:31:07.229652    7500 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:07.229652    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:07.229652    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-013100 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:07.280342    7500 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:07.280342    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:07.280342    7500 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 21:31:07.280342    7500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 21:31:07.280342    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-013100 ).state
	I0429 21:31:07.128593   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:09.519355   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:09.519355   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:09.519667   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:09.641176    7500 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:09.641176    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:09.641176    7500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-013100 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:10.065066    7500 main.go:141] libmachine: [stdout =====>] : 172.17.255.204
	
	I0429 21:31:10.065066    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:10.066073    7500 sshutil.go:53] new ssh client: &{IP:172.17.255.204 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\running-upgrade-013100\id_rsa Username:docker}
	I0429 21:31:10.237267    7500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 21:31:11.782245    7500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.5447956s)
	I0429 21:31:12.500196    7500 main.go:141] libmachine: [stdout =====>] : 172.17.255.204
	
	I0429 21:31:12.500196    7500 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:12.500196    7500 sshutil.go:53] new ssh client: &{IP:172.17.255.204 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\running-upgrade-013100\id_rsa Username:docker}
	I0429 21:31:12.657248    7500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 21:31:13.069256    7500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 21:31:13.072858    7500 addons.go:505] duration metric: took 10.6913049s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 21:31:13.072858    7500 start.go:245] waiting for cluster config update ...
	I0429 21:31:13.072858    7500 start.go:254] writing updated cluster config ...
	I0429 21:31:13.088313    7500 ssh_runner.go:195] Run: rm -f paused
	I0429 21:31:13.247660    7500 start.go:600] kubectl: 1.30.0, cluster: 1.24.1 (minor skew: 6)
	I0429 21:31:13.250554    7500 out.go:177] 
	W0429 21:31:13.253919    7500 out.go:239] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.30.0, which may have incompatibilities with Kubernetes 1.24.1.
	I0429 21:31:13.257515    7500 out.go:177]   - Want kubectl v1.24.1? Try 'minikube kubectl -- get pods -A'
	I0429 21:31:13.262216    7500 out.go:177] * Done! kubectl is now configured to use "running-upgrade-013100" cluster and "default" namespace by default
	I0429 21:31:12.300199   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:31:12.300199   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:13.311369   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:15.581191   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:15.582192   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:15.582238   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:18.259303   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:31:18.259303   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:19.271063   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:21.540946   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:21.540946   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:21.540946   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:24.240222   14108 main.go:141] libmachine: [stdout =====>] : 
	I0429 21:31:24.240222   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:25.252890   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:27.544317   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:27.544317   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:27.544317   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:30.276834   14108 main.go:141] libmachine: [stdout =====>] : 172.17.248.164
	
	I0429 21:31:30.276834   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:30.276834   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:32.499617   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:32.499617   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:32.499617   14108 machine.go:94] provisionDockerMachine start ...
	I0429 21:31:32.499617   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:34.772253   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:34.772253   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:34.772253   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:37.445762   14108 main.go:141] libmachine: [stdout =====>] : 172.17.248.164
	
	I0429 21:31:37.445762   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:37.452579   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 21:31:37.453360   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.164 22 <nil> <nil>}
	I0429 21:31:37.453360   14108 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 21:31:37.582584   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 21:31:37.582584   14108 buildroot.go:166] provisioning hostname "cert-expiration-004200"
	I0429 21:31:37.582703   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:39.779332   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:39.779332   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:39.779332   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:43.689550    2584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4704657s)
	I0429 21:31:43.704733    2584 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 21:31:43.775543    2584 out.go:177] 
	W0429 21:31:43.778152    2584 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 21:24:17 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:24:17 pause-416800 dockerd[655]: time="2024-04-29T21:24:17.627211246Z" level=info msg="Starting up"
	Apr 29 21:24:17 pause-416800 dockerd[655]: time="2024-04-29T21:24:17.628407685Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:24:17 pause-416800 dockerd[655]: time="2024-04-29T21:24:17.629693226Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=661
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.678950314Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.710969847Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711084050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711323458Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711434562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711570566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.711677570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712121184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712280589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712307690Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712320390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712425794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.712929110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.715949207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716089412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716361621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716406122Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716527226Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716682231Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.716726732Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746659397Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746878205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746910806Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746932106Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.746952007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.747107712Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.747684831Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748291350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748352552Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748385753Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748409654Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748434855Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748456355Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748482656Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748543658Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748565359Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748603560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748624061Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748655462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748687663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748751365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748795266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748855368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748876469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748895970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748916070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748936471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.748982472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749004473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749025474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749042574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749068375Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749099876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749121577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749142378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749224980Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.749943103Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.750222512Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.750432719Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.750857533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.751022438Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.751579656Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.752779895Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.752934200Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.753010902Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:24:17 pause-416800 dockerd[661]: time="2024-04-29T21:24:17.753037203Z" level=info msg="containerd successfully booted in 0.076654s"
	Apr 29 21:24:18 pause-416800 dockerd[655]: time="2024-04-29T21:24:18.703111290Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:24:18 pause-416800 dockerd[655]: time="2024-04-29T21:24:18.738204250Z" level=info msg="Loading containers: start."
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.054550940Z" level=info msg="Loading containers: done."
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.084144193Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.084331898Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.212016146Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:24:19 pause-416800 dockerd[655]: time="2024-04-29T21:24:19.212412557Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:24:19 pause-416800 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.766966771Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:24:51 pause-416800 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.770148978Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.771317881Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.771374881Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:24:51 pause-416800 dockerd[655]: time="2024-04-29T21:24:51.771426981Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:24:52 pause-416800 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:24:52 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:24:52 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:24:52 pause-416800 dockerd[1013]: time="2024-04-29T21:24:52.862162617Z" level=info msg="Starting up"
	Apr 29 21:24:52 pause-416800 dockerd[1013]: time="2024-04-29T21:24:52.864000922Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:24:52 pause-416800 dockerd[1013]: time="2024-04-29T21:24:52.869958936Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1020
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.903514914Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.937816193Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.937972294Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938035194Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938090094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938130194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938197794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938412195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938555795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938579995Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938592095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938621595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.938819196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.942763205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.942907305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943227706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943338206Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943392106Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943495907Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943514407Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943748007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943907108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943953908Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943975308Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.943991308Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.944220208Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.944714809Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.944920510Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945271211Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945390411Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945434311Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945540911Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945564911Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945599111Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945633112Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945656312Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945672712Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945687512Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945714212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945732112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945747012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945762812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.945777512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946269113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946294213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946313913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946330713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946350413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946364813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946379413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946394913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946414913Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946440913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946529814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946550814Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946632314Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946835514Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946949815Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.946972015Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947045415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947128815Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947152615Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947765017Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.947900317Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.948024517Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:24:52 pause-416800 dockerd[1020]: time="2024-04-29T21:24:52.948089117Z" level=info msg="containerd successfully booted in 0.045741s"
	Apr 29 21:24:53 pause-416800 dockerd[1013]: time="2024-04-29T21:24:53.919350476Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:24:53 pause-416800 dockerd[1013]: time="2024-04-29T21:24:53.942953531Z" level=info msg="Loading containers: start."
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.156318927Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.253702753Z" level=info msg="Loading containers: done."
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.278231910Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.278430711Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:24:54 pause-416800 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.337975249Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:24:54 pause-416800 dockerd[1013]: time="2024-04-29T21:24:54.338619251Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.421392072Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.424167378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.424922480Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.425123880Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:25:07 pause-416800 dockerd[1013]: time="2024-04-29T21:25:07.425595281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:25:07 pause-416800 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:25:08 pause-416800 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:25:08 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:25:08 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:25:08 pause-416800 dockerd[1318]: time="2024-04-29T21:25:08.514335713Z" level=info msg="Starting up"
	Apr 29 21:25:08 pause-416800 dockerd[1318]: time="2024-04-29T21:25:08.515340415Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 21:25:08 pause-416800 dockerd[1318]: time="2024-04-29T21:25:08.519939626Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1324
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.560165920Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591448392Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591618393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591808593Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.591928293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.592015494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.592135494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.593715198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.593928998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594166999Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594189299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594354299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.594668300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.597981608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598187308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598468209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598584709Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598621109Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598644009Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598656209Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598818210Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598882010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598906110Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598924910Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.598943910Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599028010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599572611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599770812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599881712Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599915212Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599939512Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599956412Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599972012Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.599990912Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600035212Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600113013Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600133913Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600172813Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600205713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600223313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600353913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600375613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600390413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600406413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600421513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600436813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600460613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600483513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600497713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600517713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600533914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600565714Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600612814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600645514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600672714Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600831214Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600938314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600958114Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.600970915Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601044115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601139915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601176415Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601569916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.601732616Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.602040817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 21:25:08 pause-416800 dockerd[1324]: time="2024-04-29T21:25:08.602127717Z" level=info msg="containerd successfully booted in 0.044753s"
	Apr 29 21:25:09 pause-416800 dockerd[1318]: time="2024-04-29T21:25:09.568563964Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.411662825Z" level=info msg="Loading containers: start."
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.615595599Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.721449045Z" level=info msg="Loading containers: done."
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.750303212Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.750790913Z" level=info msg="Daemon has completed initialization"
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.805303840Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 21:25:10 pause-416800 systemd[1]: Started Docker Application Container Engine.
	Apr 29 21:25:10 pause-416800 dockerd[1318]: time="2024-04-29T21:25:10.807829146Z" level=info msg="API listen on [::]:2376"
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.606425824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.606496821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.606570118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.607592879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.623147474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.623943043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.624244632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.626386948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.719675324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.719927914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.719949513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.720297999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760656531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760731228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760746928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:21 pause-416800 dockerd[1324]: time="2024-04-29T21:25:21.760839824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.059597766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.060044350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.060251342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.060579630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366461432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366749622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366806620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.366942615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414168301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414262998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414298596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.414404993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441025627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441145322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441160622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:22 pause-416800 dockerd[1324]: time="2024-04-29T21:25:22.441265218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.596447790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.598287877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.598455176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.598838073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844422752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844491151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844505151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.844606550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.895839891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.896657886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.896883084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:44 pause-416800 dockerd[1324]: time="2024-04-29T21:25:44.903961635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.228771091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.229506586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.229740485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:45 pause-416800 dockerd[1324]: time="2024-04-29T21:25:45.231011777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093311009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093466710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093484810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.093596411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.126263544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.129676068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.129724968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:46 pause-416800 dockerd[1324]: time="2024-04-29T21:25:46.134574903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 21:25:56 pause-416800 dockerd[1318]: time="2024-04-29T21:25:56.582781702Z" level=info msg="ignoring event" container=143a22070c0c0a7b387153dda7779cd56d9cabd3c0bffc447f7404c1b8d9913f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.586196426Z" level=info msg="shim disconnected" id=143a22070c0c0a7b387153dda7779cd56d9cabd3c0bffc447f7404c1b8d9913f namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.586362427Z" level=warning msg="cleaning up after shim disconnected" id=143a22070c0c0a7b387153dda7779cd56d9cabd3c0bffc447f7404c1b8d9913f namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.586384027Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.800643235Z" level=info msg="shim disconnected" id=0600823b5b43fef50833297d9ceed953dc608849a1f8f13fd2b4cba160ab9559 namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.801013438Z" level=warning msg="cleaning up after shim disconnected" id=0600823b5b43fef50833297d9ceed953dc608849a1f8f13fd2b4cba160ab9559 namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1324]: time="2024-04-29T21:25:56.801232940Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:25:56 pause-416800 dockerd[1318]: time="2024-04-29T21:25:56.801693743Z" level=info msg="ignoring event" container=0600823b5b43fef50833297d9ceed953dc608849a1f8f13fd2b4cba160ab9559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.260439877Z" level=info msg="Processing signal 'terminated'"
	Apr 29 21:30:32 pause-416800 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.568855275Z" level=info msg="ignoring event" container=5c8f5267a3cf475e316d3da584a95cc218c8b2a4230353e77ac850741880d1f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.569774580Z" level=info msg="shim disconnected" id=5c8f5267a3cf475e316d3da584a95cc218c8b2a4230353e77ac850741880d1f3 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.573119596Z" level=warning msg="cleaning up after shim disconnected" id=5c8f5267a3cf475e316d3da584a95cc218c8b2a4230353e77ac850741880d1f3 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.573571898Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.607146261Z" level=info msg="shim disconnected" id=fa0e7451cce167c319c12849528c38723c094cded94adf70f0471c25757ae2e9 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.607332862Z" level=warning msg="cleaning up after shim disconnected" id=fa0e7451cce167c319c12849528c38723c094cded94adf70f0471c25757ae2e9 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.607463163Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.608261866Z" level=info msg="ignoring event" container=fa0e7451cce167c319c12849528c38723c094cded94adf70f0471c25757ae2e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.623792242Z" level=info msg="ignoring event" container=f647857d7c137481de3268cb9c2654392b71c81e81058fcb7f35c1190433a6e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.624257044Z" level=info msg="shim disconnected" id=f647857d7c137481de3268cb9c2654392b71c81e81058fcb7f35c1190433a6e1 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.625775252Z" level=warning msg="cleaning up after shim disconnected" id=f647857d7c137481de3268cb9c2654392b71c81e81058fcb7f35c1190433a6e1 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.625906352Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.633692290Z" level=info msg="ignoring event" container=becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.634674195Z" level=info msg="shim disconnected" id=becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.634789095Z" level=warning msg="cleaning up after shim disconnected" id=becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.634854896Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.651124375Z" level=info msg="ignoring event" container=45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.655334295Z" level=info msg="shim disconnected" id=45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.656874203Z" level=warning msg="cleaning up after shim disconnected" id=45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.657076404Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.658227909Z" level=info msg="ignoring event" container=9baf3eb6e35e2a7ac754a8b432fcf966fb71e817d62f3bfe5d7e5f3ac09d6a9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.660480020Z" level=info msg="shim disconnected" id=9baf3eb6e35e2a7ac754a8b432fcf966fb71e817d62f3bfe5d7e5f3ac09d6a9f namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.660635321Z" level=warning msg="cleaning up after shim disconnected" id=9baf3eb6e35e2a7ac754a8b432fcf966fb71e817d62f3bfe5d7e5f3ac09d6a9f namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.660767522Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.682335926Z" level=info msg="shim disconnected" id=93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.682727028Z" level=warning msg="cleaning up after shim disconnected" id=93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.690600767Z" level=info msg="ignoring event" container=93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.690728567Z" level=info msg="ignoring event" container=fb1fdb294cb6a0cad43ec59e1aef0133cc72405900a4da100246efb834b9b250 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.690769567Z" level=info msg="ignoring event" container=6804050c7ca2aa5b7de1e60b9c79de1967a6fb06aad45f5ae3fba7964b22edda module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.690408666Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.694207284Z" level=info msg="shim disconnected" id=6804050c7ca2aa5b7de1e60b9c79de1967a6fb06aad45f5ae3fba7964b22edda namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.694956788Z" level=warning msg="cleaning up after shim disconnected" id=6804050c7ca2aa5b7de1e60b9c79de1967a6fb06aad45f5ae3fba7964b22edda namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.695260289Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.702901026Z" level=info msg="shim disconnected" id=fb1fdb294cb6a0cad43ec59e1aef0133cc72405900a4da100246efb834b9b250 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.703184628Z" level=warning msg="cleaning up after shim disconnected" id=fb1fdb294cb6a0cad43ec59e1aef0133cc72405900a4da100246efb834b9b250 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.703319028Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1318]: time="2024-04-29T21:30:32.728201349Z" level=info msg="ignoring event" container=7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.729420255Z" level=info msg="shim disconnected" id=7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.729627256Z" level=warning msg="cleaning up after shim disconnected" id=7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5 namespace=moby
	Apr 29 21:30:32 pause-416800 dockerd[1324]: time="2024-04-29T21:30:32.729930858Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:37 pause-416800 dockerd[1318]: time="2024-04-29T21:30:37.420909147Z" level=info msg="ignoring event" container=c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:37 pause-416800 dockerd[1324]: time="2024-04-29T21:30:37.425478770Z" level=info msg="shim disconnected" id=c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01 namespace=moby
	Apr 29 21:30:37 pause-416800 dockerd[1324]: time="2024-04-29T21:30:37.425705171Z" level=warning msg="cleaning up after shim disconnected" id=c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01 namespace=moby
	Apr 29 21:30:37 pause-416800 dockerd[1324]: time="2024-04-29T21:30:37.425728471Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.438113999Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1
	Apr 29 21:30:42 pause-416800 dockerd[1324]: time="2024-04-29T21:30:42.489671939Z" level=info msg="shim disconnected" id=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1 namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.489674839Z" level=info msg="ignoring event" container=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 21:30:42 pause-416800 dockerd[1324]: time="2024-04-29T21:30:42.489759140Z" level=warning msg="cleaning up after shim disconnected" id=3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1 namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1324]: time="2024-04-29T21:30:42.489777640Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.568187753Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.569158555Z" level=info msg="Daemon shutdown complete"
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.569301356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 21:30:42 pause-416800 dockerd[1318]: time="2024-04-29T21:30:42.569338456Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 21:30:43 pause-416800 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 21:30:43 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:30:43 pause-416800 systemd[1]: docker.service: Consumed 9.301s CPU time.
	Apr 29 21:30:43 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:30:43 pause-416800 dockerd[4608]: time="2024-04-29T21:30:43.658828498Z" level=info msg="Starting up"
	Apr 29 21:31:43 pause-416800 dockerd[4608]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 21:31:43 pause-416800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 21:31:43 pause-416800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 21:31:43 pause-416800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 21:31:43.779435    2584 out.go:239] * 
	W0429 21:31:43.780659    2584 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 21:31:43.785834    2584 out.go:177] 
	I0429 21:31:42.453376   14108 main.go:141] libmachine: [stdout =====>] : 172.17.248.164
	
	I0429 21:31:42.453376   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:42.459708   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 21:31:42.460386   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.164 22 <nil> <nil>}
	I0429 21:31:42.460386   14108 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-004200 && echo "cert-expiration-004200" | sudo tee /etc/hostname
	I0429 21:31:42.619459   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-004200
	
	I0429 21:31:42.619515   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:44.911474   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:44.911474   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:44.911474   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:47.732930   14108 main.go:141] libmachine: [stdout =====>] : 172.17.248.164
	
	I0429 21:31:47.733006   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:47.741047   14108 main.go:141] libmachine: Using SSH client type: native
	I0429 21:31:47.741577   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeca1c0] 0xeccda0 <nil>  [] 0s} 172.17.248.164 22 <nil> <nil>}
	I0429 21:31:47.741692   14108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-004200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-004200/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-004200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 21:31:47.900421   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 21:31:47.900526   14108 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 21:31:47.900526   14108 buildroot.go:174] setting up certificates
	I0429 21:31:47.900526   14108 provision.go:84] configureAuth start
	I0429 21:31:47.900526   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:50.210206   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:50.210206   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:50.210206   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	I0429 21:31:52.988874   14108 main.go:141] libmachine: [stdout =====>] : 172.17.248.164
	
	I0429 21:31:52.988874   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:52.988874   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-004200 ).state
	I0429 21:31:55.271508   14108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 21:31:55.271508   14108 main.go:141] libmachine: [stderr =====>] : 
	I0429 21:31:55.271606   14108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-004200 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	Apr 29 21:32:43 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:32:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363'"
	Apr 29 21:32:43 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:32:43Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 29 21:32:43 pause-416800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 21:32:43 pause-416800 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 29 21:32:44 pause-416800 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 29 21:32:44 pause-416800 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 21:32:44 pause-416800 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 21:32:44 pause-416800 dockerd[5095]: time="2024-04-29T21:32:44.226588851Z" level=info msg="Starting up"
	Apr 29 21:33:44 pause-416800 dockerd[5095]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="error getting RW layer size for container ID 'c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c9c85907cb23af1857ddf7b3f30990b344d92a842c9052f941dc882df1a26b01'"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="error getting RW layer size for container ID '93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '93cea573859cc8717c47bb2ee3054fccb8839fbe7dc1532bb92e320e86aaed84'"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="error getting RW layer size for container ID '45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '45e458467e2f8899d66563c16f84d4ea23c382ef9b26bd3ddab79bf8fe00284e'"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="error getting RW layer size for container ID 'becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'becfe759d20fe5d5c94eea4c1f2285c485e05e1a82ec7b1d3d1dc1add6cab363'"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="error getting RW layer size for container ID '7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7c94d0a5175a98f526730967f63b01c509f1c968a005ab037e7bba848bb18ab5'"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="error getting RW layer size for container ID '3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:33:44 pause-416800 cri-dockerd[1223]: time="2024-04-29T21:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3c9afe90035d36ad303605a5cb8f3ff334ba3371b985055dbfc4ca74c370e4e1'"
	Apr 29 21:33:44 pause-416800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 21:33:44 pause-416800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 21:33:44 pause-416800 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T21:33:46Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.911644] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.121846] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.641491] systemd-fstab-generator[978]: Ignoring "noauto" option for root device
	[  +0.252084] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.250663] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +2.960223] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
	[  +0.251933] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.227597] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.330499] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.119561] kauditd_printk_skb: 183 callbacks suppressed
	[Apr29 21:25] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[  +0.121291] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.879719] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[  +8.450796] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.117009] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.600766] systemd-fstab-generator[2130]: Ignoring "noauto" option for root device
	[  +0.147008] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.670623] systemd-fstab-generator[2368]: Ignoring "noauto" option for root device
	[  +0.213963] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.152095] kauditd_printk_skb: 88 callbacks suppressed
	[Apr29 21:30] systemd-fstab-generator[4180]: Ignoring "noauto" option for root device
	[  +1.050430] systemd-fstab-generator[4219]: Ignoring "noauto" option for root device
	[  +0.427533] systemd-fstab-generator[4244]: Ignoring "noauto" option for root device
	[  +0.418628] systemd-fstab-generator[4258]: Ignoring "noauto" option for root device
	[  +5.437057] kauditd_printk_skb: 87 callbacks suppressed
	
	
	==> kernel <==
	 21:34:44 up 11 min,  0 users,  load average: 0.03, 0.18, 0.16
	Linux pause-416800 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 21:34:35 pause-416800 kubelet[2137]: E0429 21:34:35.043579    2137 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m3.504161065s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 29 21:34:39 pause-416800 kubelet[2137]: I0429 21:34:39.918811    2137 status_manager.go:853] "Failed to get status for pod" podUID="bad7b13bef35f01ace1675e3a6f48c8b" pod="kube-system/kube-apiserver-pause-416800" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-416800\": dial tcp 172.17.243.17:8443: connect: connection refused"
	Apr 29 21:34:40 pause-416800 kubelet[2137]: E0429 21:34:40.045098    2137 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m8.505682032s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 29 21:34:40 pause-416800 kubelet[2137]: E0429 21:34:40.248938    2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-416800?timeout=10s\": dial tcp 172.17.243.17:8443: connect: connection refused" interval="7s"
	Apr 29 21:34:40 pause-416800 kubelet[2137]: E0429 21:34:40.748475    2137 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"pause-416800\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-416800?resourceVersion=0&timeout=10s\": dial tcp 172.17.243.17:8443: connect: connection refused"
	Apr 29 21:34:40 pause-416800 kubelet[2137]: E0429 21:34:40.749588    2137 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"pause-416800\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-416800?timeout=10s\": dial tcp 172.17.243.17:8443: connect: connection refused"
	Apr 29 21:34:40 pause-416800 kubelet[2137]: E0429 21:34:40.750746    2137 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"pause-416800\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-416800?timeout=10s\": dial tcp 172.17.243.17:8443: connect: connection refused"
	Apr 29 21:34:40 pause-416800 kubelet[2137]: E0429 21:34:40.751715    2137 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"pause-416800\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-416800?timeout=10s\": dial tcp 172.17.243.17:8443: connect: connection refused"
	Apr 29 21:34:40 pause-416800 kubelet[2137]: E0429 21:34:40.752861    2137 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"pause-416800\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-416800?timeout=10s\": dial tcp 172.17.243.17:8443: connect: connection refused"
	Apr 29 21:34:40 pause-416800 kubelet[2137]: E0429 21:34:40.752907    2137 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.514349    2137 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.514446    2137 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: I0429 21:34:44.514477    2137 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.514609    2137 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.514701    2137 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.516694    2137 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.516836    2137 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.516950    2137 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.516776    2137 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.517471    2137 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.516733    2137 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.517610    2137 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.519621    2137 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.520168    2137 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 29 21:34:44 pause-416800 kubelet[2137]: E0429 21:34:44.520828    2137 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0429 21:31:57.428382   12112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 21:32:43.935388   12112 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 21:32:43.975748   12112 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 21:32:44.010261   12112 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 21:32:44.046775   12112 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 21:32:44.085610   12112 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 21:32:44.118555   12112 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 21:33:44.250708   12112 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-416800 -n pause-416800
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-416800 -n pause-416800: exit status 2 (12.4231798s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0429 21:34:45.547686    9032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-416800" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (511.40s)

TestStartStop/group/embed-certs/serial/FirstStart (10800.554s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-862000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.30.0
E0429 21:40:24.033634   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestCertExpiration (11m46s)
	TestCertOptions (4m35s)
	TestNetworkPlugins (8m20s)
	TestStartStop (22m41s)
	TestStartStop/group/embed-certs (1m3s)
	TestStartStop/group/embed-certs/serial (1m3s)
	TestStartStop/group/embed-certs/serial/FirstStart (1m3s)
	TestStartStop/group/old-k8s-version (3m21s)
	TestStartStop/group/old-k8s-version/serial (3m21s)
	TestStartStop/group/old-k8s-version/serial/FirstStart (3m21s)

goroutine 2328 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0005feea0, 0xc0008b1bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007c8288, {0x463d540, 0x2a, 0x2a}, {0x2308526?, 0x14806f?, 0x4660760?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00086ba40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00086ba40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000070300)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 153 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00097a710, 0x3c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1da4be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002138300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00097a740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008c64d0, {0x3272400, 0xc000aa3560}, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008c64d0, 0x3b9aca00, 0x0, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2255 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020a3ba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020a3ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0020a3ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0020a3ba0, 0xc00035a080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2230
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2230 [chan receive, 8 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0025ca680, 0xc0022c6060)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2046
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 155 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 30 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 42
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 2217 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020a36c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020a36c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020a36c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0020a36c0, 0xc00252c240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2213
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2236 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0025cb380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0025cb380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0025cb380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0025cb380, 0xc0024e8280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2230
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 691 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffc83c34de0?, {0xc00006b808?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x570, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0026586c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000aea000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000aea000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0005feb60, 0xc000aea000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0005feb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0005feb60, 0x2d1b018)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2387 [syscall, locked to thread]:
syscall.SyscallN(0x10?, {0xc002f2fb20?, 0xa7ea5?, 0x46edbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002f2fb88?, 0xc002f2fb80?, 0x9fdd6?, 0x46edbc0?, 0xc002f2fc08?, 0x92985?, 0x23325f70a28?, 0xc000050667?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x4c8, {0xc0008abd54?, 0x2ac, 0x14417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002814c88?, {0xc0008abd54?, 0xc002f2fd78?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002814c88, {0xc0008abd54, 0x2ac, 0x2ac})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0008a81d0, {0xc0008abd54?, 0x0?, 0xe16?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0026661e0, {0x3270fc0, 0xc0000a6dc8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3271100, 0xc0026661e0}, {0x3270fc0, 0xc0000a6dc8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc00275e580?, {0x3271100, 0xc0026661e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x3271100?, 0xc0026661e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3271100, 0xc0026661e0}, {0x3271080, 0xc0008a81d0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x2d1b050?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2273
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2213 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0020a29c0, 0x2d1b318)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2122
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 154 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3295e00, 0xc000054420}, 0xc00210bf50, 0xc00210bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3295e00, 0xc000054420}, 0xa0?, 0xc00210bf50, 0xc00210bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3295e00?, 0xc000054420?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00210bfd0?, 0x21e404?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 935 [chan receive, 151 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00289aa40, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 926
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 1263 [chan send, 143 minutes]:
os/exec.(*Cmd).watchCtx(0xc0022cdb80, 0xc0026cc8a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 863
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 187 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002138420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 178
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 188 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00097a740, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 178
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 929 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 928
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 748 [IO wait, 162 minutes]:
internal/poll.runtime_pollWait(0x2336b903718, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000580408?, 0x0?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0022a3ba0, 0xc002167bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0022a3b88, 0x30c, {0xc0007b30e0?, 0x0?, 0x0?}, 0xc000580008?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0022a3b88, 0xc002167d90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0022a3b88)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc000b883e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000b883e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00077a0f0, {0x3288ea0, 0xc000b883e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc00077a0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00093e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 745
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2232 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0025ca9c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0025ca9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0025ca9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0025ca9c0, 0xc0024e8080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2230
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2267 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0xc0025bfb58?, {0xc0025bfb20?, 0xa7ea5?, 0x46edbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0027b604d?, 0xc0025bfb80?, 0x9fdd6?, 0x46edbc0?, 0xc0025bfc08?, 0x9281b?, 0x23325f70108?, 0x35?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x650, {0xc0008d653a?, 0x2c6, 0xc0008d6400?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002153188?, {0xc0008d653a?, 0xcc1be?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002153188, {0xc0008d653a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0008a80f0, {0xc0008d653a?, 0xc002219dc0?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00236c3f0, {0x3270fc0, 0xc0000a6d48})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3271100, 0xc00236c3f0}, {0x3270fc0, 0xc0000a6d48}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0025bfe78?, {0x3271100, 0xc00236c3f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0025bff38?, {0x3271100?, 0xc00236c3f0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3271100, 0xc00236c3f0}, {0x3271080, 0xc0008a80f0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002548900?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 691
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2237 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0025cb520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0025cb520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0025cb520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0025cb520, 0xc0024e8300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2230
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 928 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3295e00, 0xc000054420}, 0xc002873f50, 0xc002873f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3295e00, 0xc000054420}, 0x0?, 0xc002873f50, 0xc002873f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3295e00?, 0xc000054420?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x21e3a5?, 0xc000a298c0?, 0xc0003efb00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 935
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2214 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0020a2b60, {0x22adef5?, 0x0?}, 0xc002506000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020a2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0020a2b60, 0xc00252c180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2213
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2234 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0025cb040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0025cb040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0025cb040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0025cb040, 0xc0024e8180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2230
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 692 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffc83c34de0?, {0xc0026c39a8?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x4ec, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002767050)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0022d06e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0022d06e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0005ff1e0, 0xc0022d06e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0005ff1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:131 +0x576
testing.tRunner(0xc0005ff1e0, 0x2d1b010)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2272 [chan receive]:
testing.(*T).Run(0xc0008c2d00, {0x22b7546?, 0x60400000004?}, 0xc0024e8600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0008c2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0008c2d00, 0xc0024e8500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2219
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2238 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0025cb6c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0025cb6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0025cb6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0025cb6c0, 0xc0024e8380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2230
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2353 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0008c24e0, {0x22b7546?, 0x60400000004?}, 0xc002506100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0008c24e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0008c24e0, 0xc002506000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2214
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2122 [chan receive, 24 minutes]:
testing.(*T).Run(0xc0020a2ea0, {0x22ac9f1?, 0x1d7333?}, 0x2d1b318)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0020a2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0020a2ea0, 0x2d1b140)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2233 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0025caea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0025caea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0025caea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0025caea0, 0xc0024e8100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2230
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 927 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00289aa10, 0x36)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1da4be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000888b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00289aa40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008f3110, {0x3272400, 0xc002093da0}, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008f3110, 0x3b9aca00, 0x0, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 935
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2350 [syscall, locked to thread]:
syscall.SyscallN(0x2336b915290?, {0xc002191b20?, 0xa7ea5?, 0x46edbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x2336b915290?, 0xc002191b80?, 0x9fdd6?, 0x46edbc0?, 0xc002191c08?, 0x92985?, 0x23325f70a28?, 0x2004d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5c0, {0xc002704243?, 0x5bd, 0x14417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002815188?, {0xc002704243?, 0xc5170?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002815188, {0xc002704243, 0x5bd, 0x5bd})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6e98, {0xc002704243?, 0xc0028dd180?, 0x205?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002666090, {0x3270fc0, 0xc0008a81c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3271100, 0xc002666090}, {0x3270fc0, 0xc0008a81c0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002191e78?, {0x3271100, 0xc002666090})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002191f38?, {0x3271100?, 0xc002666090?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3271100, 0xc002666090}, {0x3271080, 0xc0000a6e98}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026cc240?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 692
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 924 [chan send, 151 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a29a20, 0xc0003efda0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 923
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 934 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000888c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 926
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2351 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0xc0001c30a0?, {0xc002083b20?, 0xa7ea5?, 0x46edbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x3271141?, 0xc002083b80?, 0x9fdd6?, 0x46edbc0?, 0xc002083c08?, 0x9281b?, 0x88ba6?, 0xc0020b4041?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3c4, {0xc0008d6d3a?, 0x2c6, 0xc0008d6c00?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002815688?, {0xc0008d6d3a?, 0xcc1be?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002815688, {0xc0008d6d3a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6f00, {0xc0008d6d3a?, 0xc002083d98?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0026660c0, {0x3270fc0, 0xc002274588})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3271100, 0xc0026660c0}, {0x3270fc0, 0xc002274588}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3271100, 0xc0026660c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x90c36?, {0x3271100?, 0xc0026660c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3271100, 0xc0026660c0}, {0x3271080, 0xc0000a6f00}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0025489c0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 692
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2370 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffc83c34de0?, {0xc0023bdae0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x5f8, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0027665d0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00275e000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00275e000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008c2820, 0xc00275e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x3295c40?, 0xc00045a000?}, 0xc0008c2820, {0xc000aee018?, 0x6630130c?}, {0xc021638ca8?, 0xc0023bdf60?}, {0x1d7333?, 0x128d6f?}, {0xc000000600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0008c2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0008c2820, 0xc002506100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2353
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2388 [select]:
os/exec.(*Cmd).watchCtx(0xc0022d0000, 0xc0026ee2a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2273
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2046 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0020a21a0, {0x22ac9f1?, 0xff48d?}, 0xc0022c6060)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0020a21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0020a21a0, 0x2d1b0f8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2216 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020a3040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020a3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020a3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0020a3040, 0xc00252c200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2213
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2231 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0025ca820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0025ca820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0025ca820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0025ca820, 0xc0024e8000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2230
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2273 [syscall, locked to thread]:
syscall.SyscallN(0x7ffc83c34de0?, {0xc002f2bae0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x3dc, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00255c7e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0022d0000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0022d0000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008c2ea0, 0xc0022d0000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x3295c40?, 0xc000916070?}, 0xc0008c2ea0, {0xc00276c018?, 0x66301397?}, {0xc00d2a1630?, 0xc002f2bf60?}, {0x1d7333?, 0x128d6f?}, {0xc0001c4f00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0008c2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0008c2ea0, 0xc0024e8600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2272
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2386 [syscall, locked to thread]:
syscall.SyscallN(0x1da52a0?, {0xc0028f5b20?, 0xa7ea5?, 0x46edbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0028f5b98?, 0xc0028f5b80?, 0x9fdd6?, 0x46edbc0?, 0xc0028f5c08?, 0x9281b?, 0x23325f70eb8?, 0xc0028f5b35?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7e0, {0xc000a2f9fb?, 0x205, 0x14417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002814288?, {0xc000a2f9fb?, 0x0?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002814288, {0xc000a2f9fb, 0x205, 0x205})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0008a8190, {0xc000a2f9fb?, 0x2336b5e8a38?, 0x6e?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0026661b0, {0x3270fc0, 0xc00035e008})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3271100, 0xc0026661b0}, {0x3270fc0, 0xc00035e008}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3271100, 0xc0026661b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x90c36?, {0x3271100?, 0xc0026661b0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3271100, 0xc0026661b0}, {0x3271080, 0xc0008a8190}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x2d1b010?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2273
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2215 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020a2d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020a2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020a2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0020a2d00, 0xc00252c1c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2213
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2218 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020a3860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020a3860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020a3860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0020a3860, 0xc00252c280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2213
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2219 [chan receive]:
testing.(*T).Run(0xc0020a3a00, {0x22adef5?, 0x0?}, 0xc0024e8500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020a3a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0020a3a00, 0xc00252c300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2213
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2266 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000ab5b20?, 0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x657473756c632022?, 0x61657243202a0a72?, 0x70796820676e6974?, 0x28204d5620767265?, 0x202c323d73555043?, 0x323d79726f6d654d?, 0x44202c424d383430?, 0x303030323d6b7369?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x46c, {0xc002705a9a?, 0x566, 0x14417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002152c88?, {0xc002705a9a?, 0xc000ab5c50?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002152c88, {0xc002705a9a, 0x566, 0x566})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0008a80d8, {0xc002705a9a?, 0x0?, 0x23c?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00236c390, {0x3270fc0, 0xc0022742f8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3271100, 0xc00236c390}, {0x3270fc0, 0xc0022742f8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x22adedd?, {0x3271100, 0xc00236c390})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x1d83a0?, {0x3271100?, 0xc00236c390?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3271100, 0xc00236c390}, {0x3271080, 0xc0008a80d8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x2d1b110?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 691
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2352 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0022d06e0, 0xc0026ee660)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 692
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2268 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000aea000, 0xc0026cc180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 691
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2235 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0006bb180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0025cb1e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0025cb1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0025cb1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0025cb1e0, 0xc0024e8200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2230
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2371 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc002097b20?, 0xa7ea5?, 0x46edbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002097b90?, 0xc002097b80?, 0x9fdd6?, 0x46edbc0?, 0xc002097c08?, 0x92985?, 0x23325f70598?, 0x108b4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6e0, {0xc000a7ba07?, 0x5f9, 0xc000a7b800?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002814508?, {0xc000a7ba07?, 0x1fbf960?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002814508, {0xc000a7ba07, 0x5f9, 0x5f9})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6d88, {0xc000a7ba07?, 0xc000498a00?, 0x207?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00254a0f0, {0x3270fc0, 0xc002274020})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3271100, 0xc00254a0f0}, {0x3270fc0, 0xc002274020}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0008be178?, {0x3271100, 0xc00254a0f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x1c412e?, {0x3271100?, 0xc00254a0f0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3271100, 0xc00254a0f0}, {0x3271080, 0xc0000a6d88}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2370
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2372 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0020b9b20?, 0xa7ea5?, 0x46edbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0020b9b98?, 0xc0020b9b80?, 0x9fdd6?, 0x46edbc0?, 0xc0020b9c08?, 0x92985?, 0x23325f70a28?, 0x3295a67?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x434, {0xc0003e9cec?, 0x314, 0x14417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002814a08?, {0xc0003e9cec?, 0x3271860?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002814a08, {0xc0003e9cec, 0x314, 0x314})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6dd8, {0xc0003e9cec?, 0x9281b?, 0x1000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00254a120, {0x3270fc0, 0xc0008a8030})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3271100, 0xc00254a120}, {0x3270fc0, 0xc0008a8030}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000003e50?, {0x3271100, 0xc00254a120})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00246aba0?, {0x3271100?, 0xc00254a120?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3271100, 0xc00254a120}, {0x3271080, 0xc0000a6dd8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000003e00?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2370
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2373 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc00275e000, 0xc002548180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2370
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3


Test pass (147/198)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.27
4 TestDownloadOnly/v1.20.0/preload-exists 0.09
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.33
9 TestDownloadOnly/v1.20.0/DeleteAll 1.52
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.46
12 TestDownloadOnly/v1.30.0/json-events 11.4
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.32
18 TestDownloadOnly/v1.30.0/DeleteAll 1.42
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 1.43
21 TestBinaryMirror 7.63
22 TestOffline 446.07
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.32
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.32
27 TestAddons/Setup 406.02
30 TestAddons/parallel/Ingress 71.12
31 TestAddons/parallel/InspektorGadget 26.74
32 TestAddons/parallel/MetricsServer 22.79
33 TestAddons/parallel/HelmTiller 30.36
35 TestAddons/parallel/CSI 118.31
36 TestAddons/parallel/Headlamp 41.39
37 TestAddons/parallel/CloudSpanner 21.45
38 TestAddons/parallel/LocalPath 32
39 TestAddons/parallel/NvidiaDevicePlugin 22.53
40 TestAddons/parallel/Yakd 5.06
43 TestAddons/serial/GCPAuth/Namespaces 0.37
44 TestAddons/StoppedEnableDisable 55.6
47 TestDockerFlags 409.13
48 TestForceSystemdFlag 258.82
49 TestForceSystemdEnv 420.98
56 TestErrorSpam/start 17.98
57 TestErrorSpam/status 37.7
58 TestErrorSpam/pause 23.6
59 TestErrorSpam/unpause 23.95
60 TestErrorSpam/stop 63.49
63 TestFunctional/serial/CopySyncFile 0.04
64 TestFunctional/serial/StartWithProxy 247.38
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 130.78
67 TestFunctional/serial/KubeContext 0.16
68 TestFunctional/serial/KubectlGetPods 0.26
71 TestFunctional/serial/CacheCmd/cache/add_remote 26.85
72 TestFunctional/serial/CacheCmd/cache/add_local 11.97
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.3
74 TestFunctional/serial/CacheCmd/cache/list 0.3
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.64
76 TestFunctional/serial/CacheCmd/cache/cache_reload 36.98
77 TestFunctional/serial/CacheCmd/cache/delete 0.6
78 TestFunctional/serial/MinikubeKubectlCmd 0.56
80 TestFunctional/serial/ExtraConfig 130.1
81 TestFunctional/serial/ComponentHealth 0.2
82 TestFunctional/serial/LogsCmd 8.8
83 TestFunctional/serial/LogsFileCmd 11
84 TestFunctional/serial/InvalidService 21.37
90 TestFunctional/parallel/StatusCmd 41.84
94 TestFunctional/parallel/ServiceCmdConnect 36.88
95 TestFunctional/parallel/AddonsCmd 0.85
96 TestFunctional/parallel/PersistentVolumeClaim 52.23
98 TestFunctional/parallel/SSHCmd 25.05
99 TestFunctional/parallel/CpCmd 61.93
100 TestFunctional/parallel/MySQL 65.4
101 TestFunctional/parallel/FileSync 11.73
102 TestFunctional/parallel/CertSync 66.33
106 TestFunctional/parallel/NodeLabels 0.63
108 TestFunctional/parallel/NonActiveRuntimeDisabled 12.3
110 TestFunctional/parallel/License 4.31
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 10.62
113 TestFunctional/parallel/Version/short 0.28
114 TestFunctional/parallel/Version/components 8.24
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.08
118 TestFunctional/parallel/ImageCommands/ImageListShort 7.74
119 TestFunctional/parallel/ImageCommands/ImageListTable 7.65
120 TestFunctional/parallel/ImageCommands/ImageListJson 7.58
121 TestFunctional/parallel/ImageCommands/ImageListYaml 7.59
122 TestFunctional/parallel/ImageCommands/ImageBuild 27.42
123 TestFunctional/parallel/ImageCommands/Setup 5.12
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 24.09
125 TestFunctional/parallel/ProfileCmd/profile_not_create 11.49
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ProfileCmd/profile_list 10.8
133 TestFunctional/parallel/ProfileCmd/profile_json_output 10.98
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 20.39
135 TestFunctional/parallel/ServiceCmd/DeployApp 16.46
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 28.71
137 TestFunctional/parallel/ServiceCmd/List 14.49
138 TestFunctional/parallel/ServiceCmd/JSONOutput 14.43
139 TestFunctional/parallel/DockerEnv/powershell 48.54
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10.29
142 TestFunctional/parallel/ImageCommands/ImageRemove 17.83
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 19.95
146 TestFunctional/parallel/UpdateContextCmd/no_changes 2.5
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.64
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.53
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 11.8
150 TestFunctional/delete_addon-resizer_images 0.5
151 TestFunctional/delete_my-image_image 0.18
152 TestFunctional/delete_minikube_cached_images 0.19
156 TestMultiControlPlane/serial/StartCluster 724.16
157 TestMultiControlPlane/serial/DeployApp 13.39
159 TestMultiControlPlane/serial/AddWorkerNode 257.75
160 TestMultiControlPlane/serial/NodeLabels 0.2
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 28.96
162 TestMultiControlPlane/serial/CopyFile 642.88
166 TestImageBuild/serial/Setup 202.03
167 TestImageBuild/serial/NormalBuild 9.9
168 TestImageBuild/serial/BuildWithBuildArg 9.39
169 TestImageBuild/serial/BuildWithDockerIgnore 7.9
170 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.75
174 TestJSONOutput/start/Command 244.11
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 8.06
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 8.09
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 40.13
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 1.59
202 TestMainNoArgs 0.27
203 TestMinikubeProfile 526.05
206 TestMountStart/serial/StartWithMountFirst 159.16
207 TestMountStart/serial/VerifyMountFirst 9.7
208 TestMountStart/serial/StartWithMountSecond 159.91
209 TestMountStart/serial/VerifyMountSecond 9.69
210 TestMountStart/serial/DeleteFirst 27.96
211 TestMountStart/serial/VerifyMountPostDelete 9.66
212 TestMountStart/serial/Stop 27.12
220 TestMultiNode/serial/MultiNodeLabels 0.19
221 TestMultiNode/serial/ProfileList 10.04
228 TestPreload 533.41
229 TestScheduledStopWindows 339.36
234 TestRunningBinaryUpgrade 1006.79
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.47
241 TestStoppedBinaryUpgrade/Setup 0.91
242 TestStoppedBinaryUpgrade/Upgrade 865.9
251 TestPause/serial/Start 509.77
253 TestStoppedBinaryUpgrade/MinikubeLogs 10.18
TestDownloadOnly/v1.20.0/json-events (17.27s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-029800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-029800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.2731711s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.27s)

TestDownloadOnly/v1.20.0/preload-exists (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.09s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.33s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-029800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-029800: exit status 85 (325.2531ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-029800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |          |
	|         | -p download-only-029800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 18:40:38
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 18:40:38.152344    9720 out.go:291] Setting OutFile to fd 616 ...
	I0429 18:40:38.153296    9720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:40:38.153296    9720 out.go:304] Setting ErrFile to fd 620...
	I0429 18:40:38.153296    9720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 18:40:38.166992    9720 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0429 18:40:38.181379    9720 out.go:298] Setting JSON to true
	I0429 18:40:38.184873    9720 start.go:129] hostinfo: {"hostname":"minikube6","uptime":17977,"bootTime":1714398060,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 18:40:38.184873    9720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 18:40:38.191672    9720 out.go:97] [download-only-029800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 18:40:38.198091    9720 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	W0429 18:40:38.192129    9720 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0429 18:40:38.192129    9720 notify.go:220] Checking for updates...
	I0429 18:40:38.204762    9720 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 18:40:38.207690    9720 out.go:169] MINIKUBE_LOCATION=18774
	I0429 18:40:38.210893    9720 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0429 18:40:38.214586    9720 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 18:40:38.217377    9720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:40:43.721094    9720 out.go:97] Using the hyperv driver based on user configuration
	I0429 18:40:43.721269    9720 start.go:297] selected driver: hyperv
	I0429 18:40:43.721269    9720 start.go:901] validating driver "hyperv" against <nil>
	I0429 18:40:43.721707    9720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 18:40:43.775817    9720 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0429 18:40:43.776305    9720 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 18:40:43.777518    9720 cni.go:84] Creating CNI manager for ""
	I0429 18:40:43.777655    9720 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 18:40:43.777935    9720 start.go:340] cluster config:
	{Name:download-only-029800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-029800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:40:43.779836    9720 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:40:43.782884    9720 out.go:97] Downloading VM boot image ...
	I0429 18:40:43.782884    9720 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 18:40:47.895244    9720 out.go:97] Starting "download-only-029800" primary control-plane node in "download-only-029800" cluster
	I0429 18:40:47.896156    9720 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 18:40:47.941633    9720 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 18:40:47.942556    9720 cache.go:56] Caching tarball of preloaded images
	I0429 18:40:47.943157    9720 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 18:40:47.945934    9720 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 18:40:47.945934    9720 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 18:40:48.015211    9720 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 18:40:51.726387    9720 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 18:40:51.727197    9720 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 18:40:52.806090    9720 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 18:40:52.806576    9720 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-029800\config.json ...
	I0429 18:40:52.807275    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-029800\config.json: {Name:mkca4a7256bb2e7ace23355b51f223048bf7474b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:52.807759    9720 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 18:40:52.810051    9720 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-029800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-029800"

-- /stdout --
** stderr ** 
	W0429 18:40:55.436590    5788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.33s)

TestDownloadOnly/v1.20.0/DeleteAll (1.52s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.5155631s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.52s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.46s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-029800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-029800: (1.4637067s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.46s)

TestDownloadOnly/v1.30.0/json-events (11.4s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-657800 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-657800 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv: (11.3985516s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (11.40s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-657800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-657800: exit status 85 (317.6958ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-029800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | -p download-only-029800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| delete  | -p download-only-029800        | download-only-029800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| start   | -o=json --download-only        | download-only-657800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | -p download-only-657800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 18:40:58
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 18:40:58.830414    4176 out.go:291] Setting OutFile to fd 780 ...
	I0429 18:40:58.830791    4176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:40:58.830791    4176 out.go:304] Setting ErrFile to fd 784...
	I0429 18:40:58.830791    4176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:40:58.854573    4176 out.go:298] Setting JSON to true
	I0429 18:40:58.861200    4176 start.go:129] hostinfo: {"hostname":"minikube6","uptime":17998,"bootTime":1714398060,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 18:40:58.861200    4176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 18:40:59.006121    4176 out.go:97] [download-only-657800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 18:40:59.007490    4176 notify.go:220] Checking for updates...
	I0429 18:40:59.009892    4176 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 18:40:59.012418    4176 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 18:40:59.014525    4176 out.go:169] MINIKUBE_LOCATION=18774
	I0429 18:40:59.017647    4176 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0429 18:40:59.024955    4176 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 18:40:59.024955    4176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:41:04.614711    4176 out.go:97] Using the hyperv driver based on user configuration
	I0429 18:41:04.614763    4176 start.go:297] selected driver: hyperv
	I0429 18:41:04.614763    4176 start.go:901] validating driver "hyperv" against <nil>
	I0429 18:41:04.615123    4176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 18:41:04.669051    4176 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0429 18:41:04.670299    4176 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 18:41:04.670495    4176 cni.go:84] Creating CNI manager for ""
	I0429 18:41:04.670495    4176 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 18:41:04.670495    4176 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 18:41:04.670495    4176 start.go:340] cluster config:
	{Name:download-only-657800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-657800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:41:04.670495    4176 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:41:04.675967    4176 out.go:97] Starting "download-only-657800" primary control-plane node in "download-only-657800" cluster
	I0429 18:41:04.675967    4176 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 18:41:04.719292    4176 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 18:41:04.719292    4176 cache.go:56] Caching tarball of preloaded images
	I0429 18:41:04.720678    4176 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 18:41:04.724259    4176 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 18:41:04.724259    4176 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 18:41:04.791370    4176 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 18:41:07.819051    4176 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 18:41:07.820230    4176 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 18:41:08.781755    4176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 18:41:08.782979    4176 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-657800\config.json ...
	I0429 18:41:08.783334    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-657800\config.json: {Name:mk631fe416c0ead1fa08a2f381c9a93161f6441b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:41:08.783595    4176 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 18:41:08.784801    4176 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.30.0/kubectl.exe
	
	
	* The control-plane node download-only-657800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-657800"

-- /stdout --
** stderr ** 
	W0429 18:41:10.143806   13988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.32s)

TestDownloadOnly/v1.30.0/DeleteAll (1.42s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4178164s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (1.42s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.43s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-657800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-657800: (1.4321421s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.43s)

TestBinaryMirror (7.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-491300 --alsologtostderr --binary-mirror http://127.0.0.1:52224 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-491300 --alsologtostderr --binary-mirror http://127.0.0.1:52224 --driver=hyperv: (6.628177s)
helpers_test.go:175: Cleaning up "binary-mirror-491300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-491300
--- PASS: TestBinaryMirror (7.63s)

TestOffline (446.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-186800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-186800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (6m39.2308027s)
helpers_test.go:175: Cleaning up "offline-docker-186800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-186800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-186800: (46.8341486s)
--- PASS: TestOffline (446.07s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.32s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-442400
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-442400: exit status 85 (319.8809ms)

-- stdout --
	* Profile "addons-442400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442400"

-- /stdout --
** stderr ** 
	W0429 18:41:23.951218   13516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.32s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.32s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-442400
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-442400: exit status 85 (316.5228ms)

-- stdout --
	* Profile "addons-442400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442400"

-- /stdout --
** stderr ** 
	W0429 18:41:23.956040   10384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.32s)

TestAddons/Setup (406.02s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-442400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-442400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m46.0194418s)
--- PASS: TestAddons/Setup (406.02s)

TestAddons/parallel/Ingress (71.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-442400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-442400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-442400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [21e0ed20-18bf-45f9-8dd1-ccf4dbb2a528] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [21e0ed20-18bf-45f9-8dd1-ccf4dbb2a528] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0172608s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.6267966s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-442400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0429 18:49:24.200411    2312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-442400 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 ip: (2.9655925s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.17.248.23
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 addons disable ingress-dns --alsologtostderr -v=1: (18.205742s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 addons disable ingress --alsologtostderr -v=1: (22.3545829s)
--- PASS: TestAddons/parallel/Ingress (71.12s)

TestAddons/parallel/InspektorGadget (26.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fxfn2" [a97dcda8-caec-41ca-b8a2-1406403f9fa1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0142519s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-442400
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-442400: (21.3770598s)
--- PASS: TestAddons/parallel/InspektorGadget (26.74s)

TestAddons/parallel/MetricsServer (22.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 22.4159ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-ml72s" [a7d59c29-8cc7-438b-acd1-645e496c2ccf] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0203487s
addons_test.go:415: (dbg) Run:  kubectl --context addons-442400 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 addons disable metrics-server --alsologtostderr -v=1: (16.5480844s)
--- PASS: TestAddons/parallel/MetricsServer (22.79s)

TestAddons/parallel/HelmTiller (30.36s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 5.9294ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-pjbhh" [88fefac1-a788-4a3d-9774-10960137a07d] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0197501s
addons_test.go:473: (dbg) Run:  kubectl --context addons-442400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-442400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.4443716s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 addons disable helm-tiller --alsologtostderr -v=1: (15.8710021s)
--- PASS: TestAddons/parallel/HelmTiller (30.36s)

TestAddons/parallel/CSI (118.31s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 30.5926ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-442400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-442400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e567d390-2263-4ab9-814b-f405e4f2ab0e] Pending
helpers_test.go:344: "task-pv-pod" [e567d390-2263-4ab9-814b-f405e4f2ab0e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e567d390-2263-4ab9-814b-f405e4f2ab0e] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 26.011881s
addons_test.go:584: (dbg) Run:  kubectl --context addons-442400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-442400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-442400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-442400 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-442400 delete pod task-pv-pod: (1.2686033s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-442400 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-442400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-442400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [795f40cc-705f-439e-a43a-47c6b79a9850] Pending
helpers_test.go:344: "task-pv-pod-restore" [795f40cc-705f-439e-a43a-47c6b79a9850] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [795f40cc-705f-439e-a43a-47c6b79a9850] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.013935s
addons_test.go:626: (dbg) Run:  kubectl --context addons-442400 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-442400 delete pod task-pv-pod-restore: (1.3098913s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-442400 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-442400 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (24.0030339s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 addons disable volumesnapshots --alsologtostderr -v=1: (15.9030257s)
--- PASS: TestAddons/parallel/CSI (118.31s)

TestAddons/parallel/Headlamp (41.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-442400 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-442400 --alsologtostderr -v=1: (18.3489395s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-6tr5k" [a212de60-c12a-47ba-80a5-66f73ac0d00b] Pending
helpers_test.go:344: "headlamp-7559bf459f-6tr5k" [a212de60-c12a-47ba-80a5-66f73ac0d00b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-6tr5k" [a212de60-c12a-47ba-80a5-66f73ac0d00b] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-6tr5k" [a212de60-c12a-47ba-80a5-66f73ac0d00b] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 23.0271299s
--- PASS: TestAddons/parallel/Headlamp (41.39s)

TestAddons/parallel/CloudSpanner (21.45s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-wxjzz" [286510aa-098b-468c-a037-19842a33ec20] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0122316s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-442400
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-442400: (15.7486886s)
--- PASS: TestAddons/parallel/CloudSpanner (21.45s)

TestAddons/parallel/LocalPath (32s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-442400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-442400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-442400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cfe9a651-345b-4800-966d-41510aeeeb39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cfe9a651-345b-4800-966d-41510aeeeb39] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cfe9a651-345b-4800-966d-41510aeeeb39] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0209663s
addons_test.go:891: (dbg) Run:  kubectl --context addons-442400 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 ssh "cat /opt/local-path-provisioner/pvc-615aeca5-4422-4969-87de-5534dc276d28_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 ssh "cat /opt/local-path-provisioner/pvc-615aeca5-4422-4969-87de-5534dc276d28_default_test-pvc/file1": (10.3449813s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-442400 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-442400 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-442400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-442400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (7.9099842s)
--- PASS: TestAddons/parallel/LocalPath (32.00s)

TestAddons/parallel/NvidiaDevicePlugin (22.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fhh92" [87e776eb-c6ac-4427-a64e-7e7528da6e3e] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0122521s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-442400
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-442400: (17.513698s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.53s)

TestAddons/parallel/Yakd (5.06s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-79778" [f4bc2d68-2be2-4c47-9c67-2f29e3a4fc3f] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0574543s
--- PASS: TestAddons/parallel/Yakd (5.06s)

TestAddons/serial/GCPAuth/Namespaces (0.37s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-442400 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-442400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.37s)

TestAddons/StoppedEnableDisable (55.6s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-442400
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-442400: (42.6869424s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-442400
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-442400: (5.2399942s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-442400
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-442400: (5.0570793s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-442400
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-442400: (2.6184771s)
--- PASS: TestAddons/StoppedEnableDisable (55.60s)

TestDockerFlags (409.13s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-286800 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-286800 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (5m40.5784319s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-286800 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-286800 ssh "sudo systemctl show docker --property=Environment --no-pager": (11.060118s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-286800 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-286800 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.1260875s)
helpers_test.go:175: Cleaning up "docker-flags-286800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-286800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-286800: (47.3639805s)
--- PASS: TestDockerFlags (409.13s)

TestForceSystemdFlag (258.82s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-262400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-262400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m27.8851829s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-262400 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-262400 ssh "docker info --format {{.CgroupDriver}}": (10.1774278s)
helpers_test.go:175: Cleaning up "force-systemd-flag-262400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-262400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-262400: (40.7566699s)
--- PASS: TestForceSystemdFlag (258.82s)

TestForceSystemdEnv (420.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-265000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0429 21:33:10.247244   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 21:33:27.283681   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-265000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (6m2.2365017s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-265000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-265000 ssh "docker info --format {{.CgroupDriver}}": (10.4111724s)
helpers_test.go:175: Cleaning up "force-systemd-env-265000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-265000
E0429 21:39:33.514618   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-265000: (48.3350207s)
--- PASS: TestForceSystemdEnv (420.98s)

TestErrorSpam/start (17.98s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 start --dry-run: (5.9396705s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 start --dry-run: (6.0493768s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 start --dry-run: (5.98283s)
--- PASS: TestErrorSpam/start (17.98s)

TestErrorSpam/status (37.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 status: (12.8528774s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 status: (12.4038898s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 status: (12.4377567s)
--- PASS: TestErrorSpam/status (37.70s)

TestErrorSpam/pause (23.6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 pause: (8.0974578s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 pause: (7.7692267s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 pause: (7.7342673s)
--- PASS: TestErrorSpam/pause (23.60s)

TestErrorSpam/unpause (23.95s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 unpause: (8.0888971s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 unpause: (8.0019626s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 unpause: (7.8537336s)
--- PASS: TestErrorSpam/unpause (23.95s)

TestErrorSpam/stop (63.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 stop
E0429 18:58:10.186529   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 stop: (41.3340656s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 stop: (11.2999713s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 stop
E0429 18:58:37.999489   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-472100 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-472100 stop: (10.8484283s)
--- PASS: TestErrorSpam/stop (63.49s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\13756\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (247.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-980800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-980800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m7.3708042s)
--- PASS: TestFunctional/serial/StartWithProxy (247.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (130.78s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-980800 --alsologtostderr -v=8
E0429 19:03:10.179420   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-980800 --alsologtostderr -v=8: (2m10.7752087s)
functional_test.go:659: soft start took 2m10.7769606s for "functional-980800" cluster.
--- PASS: TestFunctional/serial/SoftStart (130.78s)

TestFunctional/serial/KubeContext (0.16s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.16s)

TestFunctional/serial/KubectlGetPods (0.26s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-980800 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)

TestFunctional/serial/CacheCmd/cache/add_remote (26.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 cache add registry.k8s.io/pause:3.1: (9.2076979s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 cache add registry.k8s.io/pause:3.3: (8.8034848s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 cache add registry.k8s.io/pause:latest: (8.8373216s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.85s)

TestFunctional/serial/CacheCmd/cache/add_local (11.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-980800 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3344355902\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-980800 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3344355902\001: (2.923069s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 cache add minikube-local-cache-test:functional-980800
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 cache add minikube-local-cache-test:functional-980800: (8.4928143s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 cache delete minikube-local-cache-test:functional-980800
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-980800
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.97s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.30s)

TestFunctional/serial/CacheCmd/cache/list (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.30s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh sudo crictl images: (9.6375334s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.64s)

TestFunctional/serial/CacheCmd/cache/cache_reload (36.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.5650222s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-980800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.4834393s)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	W0429 19:06:18.605602    8356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 cache reload: (8.4330834s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.4993579s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.98s)

TestFunctional/serial/CacheCmd/cache/delete (0.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.60s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 kubectl -- --context functional-980800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

TestFunctional/serial/ExtraConfig (130.1s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-980800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0429 19:08:10.191767   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-980800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m10.1005995s)
functional_test.go:757: restart took 2m10.101541s for "functional-980800" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (130.10s)

TestFunctional/serial/ComponentHealth (0.2s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-980800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

TestFunctional/serial/LogsCmd (8.8s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 logs
E0429 19:09:33.377238   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 logs: (8.8004964s)
--- PASS: TestFunctional/serial/LogsCmd (8.80s)

TestFunctional/serial/LogsFileCmd (11s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1891128081\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1891128081\001\logs.txt: (10.9926791s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.00s)

TestFunctional/serial/InvalidService (21.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-980800 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-980800
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-980800: exit status 115 (17.000004s)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.17.245.90:30373 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	W0429 19:09:55.511753    6252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-980800 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (21.37s)

TestFunctional/parallel/StatusCmd (41.84s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 status: (14.7927472s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.48564s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 status -o json
E0429 19:13:10.180910   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 status -o json: (12.5587671s)
--- PASS: TestFunctional/parallel/StatusCmd (41.84s)

TestFunctional/parallel/ServiceCmdConnect (36.88s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-980800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-980800 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-6sp5d" [22e106ce-07da-4d77-8075-d7c0fbb19967] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-6sp5d" [22e106ce-07da-4d77-8075-d7c0fbb19967] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.0192733s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 service hello-node-connect --url: (19.4190667s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.17.245.90:32082
functional_test.go:1671: http://172.17.245.90:32082: success! body:
Hostname: hello-node-connect-57b4589c47-6sp5d
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.17.245.90:8080/
Request Headers:
	accept-encoding=gzip
	host=172.17.245.90:32082
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (36.88s)

TestFunctional/parallel/AddonsCmd (0.85s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.85s)

TestFunctional/parallel/PersistentVolumeClaim (52.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cb1b2baa-391c-407a-a97d-23d3d0d29f13] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0146831s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-980800 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-980800 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-980800 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-980800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [786957bc-c1af-437c-b9af-3c8ce3802703] Pending
helpers_test.go:344: "sp-pod" [786957bc-c1af-437c-b9af-3c8ce3802703] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [786957bc-c1af-437c-b9af-3c8ce3802703] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.0207945s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-980800 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-980800 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-980800 delete -f testdata/storage-provisioner/pod.yaml: (1.7980015s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-980800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [28367f63-f8b3-4e62-ab90-bd1162f06a16] Pending
helpers_test.go:344: "sp-pod" [28367f63-f8b3-4e62-ab90-bd1162f06a16] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [28367f63-f8b3-4e62-ab90-bd1162f06a16] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.0205159s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-980800 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.23s)

TestFunctional/parallel/SSHCmd (25.05s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh "echo hello": (13.1866509s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh "cat /etc/hostname": (11.8646665s)
--- PASS: TestFunctional/parallel/SSHCmd (25.05s)

TestFunctional/parallel/CpCmd (61.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 cp testdata\cp-test.txt /home/docker/cp-test.txt: (10.2696966s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh -n functional-980800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh -n functional-980800 "sudo cat /home/docker/cp-test.txt": (11.910948s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 cp functional-980800:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd166203373\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 cp functional-980800:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd166203373\001\cp-test.txt: (10.8109597s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh -n functional-980800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh -n functional-980800 "sudo cat /home/docker/cp-test.txt": (10.3273355s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.459489s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh -n functional-980800 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh -n functional-980800 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.1490212s)
--- PASS: TestFunctional/parallel/CpCmd (61.93s)

TestFunctional/parallel/MySQL (65.4s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-980800 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-z8jlq" [fd2a9bdc-05c1-4122-88f9-8295d639a74b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-z8jlq" [fd2a9bdc-05c1-4122-88f9-8295d639a74b] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 51.0146193s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;": exit status 1 (386.4788ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;": exit status 1 (309.3124ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;": exit status 1 (334.5541ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;": exit status 1 (316.1589ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;": exit status 1 (324.1042ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-980800 exec mysql-64454c8b5c-z8jlq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (65.40s)

TestFunctional/parallel/FileSync (11.73s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13756/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/test/nested/copy/13756/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/test/nested/copy/13756/hosts": (11.7307204s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.73s)

TestFunctional/parallel/CertSync (66.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13756.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/ssl/certs/13756.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/ssl/certs/13756.pem": (10.7785485s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13756.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /usr/share/ca-certificates/13756.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /usr/share/ca-certificates/13756.pem": (10.8958732s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.0104984s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/137562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/ssl/certs/137562.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/ssl/certs/137562.pem": (11.5206543s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/137562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /usr/share/ca-certificates/137562.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /usr/share/ca-certificates/137562.pem": (11.1443844s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.9760573s)
--- PASS: TestFunctional/parallel/CertSync (66.33s)

TestFunctional/parallel/NodeLabels (0.63s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-980800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.63s)

TestFunctional/parallel/NonActiveRuntimeDisabled (12.3s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-980800 ssh "sudo systemctl is-active crio": exit status 1 (12.2986017s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0429 19:10:20.610706    6444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.30s)

TestFunctional/parallel/License (4.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (4.2912364s)
--- PASS: TestFunctional/parallel/License (4.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-980800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-980800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-980800 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-980800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12760: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 8912: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.62s)

TestFunctional/parallel/Version/short (0.28s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

TestFunctional/parallel/Version/components (8.24s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 version -o=json --components: (8.2385082s)
--- PASS: TestFunctional/parallel/Version/components (8.24s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-980800 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-980800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Done: kubectl --context functional-980800 apply -f testdata\testsvc.yaml: (1.0290442s)
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [987a73c2-5ce0-4152-a006-e39e2903c4bb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [987a73c2-5ce0-4152-a006-e39e2903c4bb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0144973s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.08s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls --format short --alsologtostderr: (7.742768s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-980800 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-980800
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-980800
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-980800 image ls --format short --alsologtostderr:
W0429 19:13:30.172850    1788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 19:13:30.264026    1788 out.go:291] Setting OutFile to fd 1332 ...
I0429 19:13:30.264955    1788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:30.264955    1788 out.go:304] Setting ErrFile to fd 1296...
I0429 19:13:30.264955    1788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:30.284131    1788 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:30.284707    1788 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:30.285786    1788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:32.537968    1788 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:32.537968    1788 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:32.558350    1788 ssh_runner.go:195] Run: systemctl --version
I0429 19:13:32.558350    1788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:34.813295    1788 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:34.813295    1788 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:34.814177    1788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
I0429 19:13:37.485154    1788 main.go:141] libmachine: [stdout =====>] : 172.17.245.90

I0429 19:13:37.485154    1788 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:37.485925    1788 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
I0429 19:13:37.588732    1788 ssh_runner.go:235] Completed: systemctl --version: (5.0303449s)
I0429 19:13:37.600793    1788 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.74s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls --format table --alsologtostderr: (7.6483249s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-980800 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-980800 | 8a206986fd863 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/google-containers/addon-resizer      | functional-980800 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | latest            | 7383c266ef252 | 188MB  |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | alpine            | f4215f6ee683f | 48.3MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-980800 image ls --format table --alsologtostderr:
W0429 19:13:47.015957   13568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 19:13:47.112706   13568 out.go:291] Setting OutFile to fd 1360 ...
I0429 19:13:47.113704   13568 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:47.113704   13568 out.go:304] Setting ErrFile to fd 1344...
I0429 19:13:47.113704   13568 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:47.134157   13568 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:47.135096   13568 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:47.135096   13568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:49.355509   13568 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:49.355509   13568 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:49.371854   13568 ssh_runner.go:195] Run: systemctl --version
I0429 19:13:49.372844   13568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:51.645562   13568 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:51.645622   13568 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:51.645622   13568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
I0429 19:13:54.342030   13568 main.go:141] libmachine: [stdout =====>] : 172.17.245.90

I0429 19:13:54.342351   13568 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:54.342490   13568 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
I0429 19:13:54.447812   13568 ssh_runner.go:235] Completed: systemctl --version: (5.0749301s)
I0429 19:13:54.459958   13568 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.65s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls --format json --alsologtostderr: (7.5781491s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-980800 image ls --format json --alsologtostderr:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"8a206986fd863d554a0adebad6832395bf94f9c6c5c34ba2d55f9f80771200cf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-980800"],"size":"30"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-980800"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-980800 image ls --format json --alsologtostderr:
W0429 19:13:39.454116   13652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 19:13:39.541126   13652 out.go:291] Setting OutFile to fd 1188 ...
I0429 19:13:39.542123   13652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:39.542123   13652 out.go:304] Setting ErrFile to fd 820...
I0429 19:13:39.542123   13652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:39.558130   13652 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:39.559141   13652 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:39.559141   13652 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:41.748220   13652 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:41.748312   13652 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:41.765345   13652 ssh_runner.go:195] Run: systemctl --version
I0429 19:13:41.765345   13652 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:44.003521   13652 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:44.003521   13652 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:44.004558   13652 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
I0429 19:13:46.680664   13652 main.go:141] libmachine: [stdout =====>] : 172.17.245.90

I0429 19:13:46.681225   13652 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:46.681689   13652 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
I0429 19:13:46.788786   13652 ssh_runner.go:235] Completed: systemctl --version: (5.0234035s)
I0429 19:13:46.801461   13652 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.58s)

TestFunctional/parallel/ImageCommands/ImageListYaml (7.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls --format yaml --alsologtostderr: (7.5928619s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-980800 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-980800
size: "32900000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8a206986fd863d554a0adebad6832395bf94f9c6c5c34ba2d55f9f80771200cf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-980800
size: "30"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-980800 image ls --format yaml --alsologtostderr:
W0429 19:13:31.830108   12552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 19:13:31.928569   12552 out.go:291] Setting OutFile to fd 1304 ...
I0429 19:13:31.946920   12552 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:31.946920   12552 out.go:304] Setting ErrFile to fd 1128...
I0429 19:13:31.946920   12552 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:31.964123   12552 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:31.964123   12552 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:31.965280   12552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:34.178368   12552 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:34.178499   12552 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:34.194833   12552 ssh_runner.go:195] Run: systemctl --version
I0429 19:13:34.195847   12552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:36.408970   12552 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:36.410069   12552 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:36.410212   12552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
I0429 19:13:39.092570   12552 main.go:141] libmachine: [stdout =====>] : 172.17.245.90

I0429 19:13:39.092570   12552 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:39.093175   12552 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
I0429 19:13:39.201905   12552 ssh_runner.go:235] Completed: systemctl --version: (5.0069727s)
I0429 19:13:39.213947   12552 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.59s)

TestFunctional/parallel/ImageCommands/ImageBuild (27.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-980800 ssh pgrep buildkitd: exit status 1 (9.8184543s)

** stderr ** 
	W0429 19:13:37.931154   12912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image build -t localhost/my-image:functional-980800 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image build -t localhost/my-image:functional-980800 testdata\build --alsologtostderr: (10.2284058s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-980800 image build -t localhost/my-image:functional-980800 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 50eb71717842
---> Removed intermediate container 50eb71717842
---> 31551e6cb13e
Step 3/3 : ADD content.txt /
---> dc9247881951
Successfully built dc9247881951
Successfully tagged localhost/my-image:functional-980800
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-980800 image build -t localhost/my-image:functional-980800 testdata\build --alsologtostderr:
W0429 19:13:47.734126    7300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 19:13:47.821659    7300 out.go:291] Setting OutFile to fd 1376 ...
I0429 19:13:47.842110    7300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:47.842110    7300 out.go:304] Setting ErrFile to fd 1044...
I0429 19:13:47.842110    7300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 19:13:47.860951    7300 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:47.876915    7300 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 19:13:47.877913    7300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:50.098374    7300 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:50.098374    7300 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:50.113545    7300 ssh_runner.go:195] Run: systemctl --version
I0429 19:13:50.113545    7300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-980800 ).state
I0429 19:13:52.355186    7300 main.go:141] libmachine: [stdout =====>] : Running

I0429 19:13:52.356181    7300 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:52.356332    7300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-980800 ).networkadapters[0]).ipaddresses[0]
I0429 19:13:55.001739    7300 main.go:141] libmachine: [stdout =====>] : 172.17.245.90

I0429 19:13:55.001739    7300 main.go:141] libmachine: [stderr =====>] : 
I0429 19:13:55.002582    7300 sshutil.go:53] new ssh client: &{IP:172.17.245.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-980800\id_rsa Username:docker}
I0429 19:13:55.127695    7300 ssh_runner.go:235] Completed: systemctl --version: (5.0140154s)
I0429 19:13:55.127766    7300 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.1942913591.tar
I0429 19:13:55.143416    7300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 19:13:55.180037    7300 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1942913591.tar
I0429 19:13:55.189211    7300 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1942913591.tar: stat -c "%s %y" /var/lib/minikube/build/build.1942913591.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1942913591.tar': No such file or directory
I0429 19:13:55.189554    7300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.1942913591.tar --> /var/lib/minikube/build/build.1942913591.tar (3072 bytes)
I0429 19:13:55.256120    7300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1942913591
I0429 19:13:55.294802    7300 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1942913591 -xf /var/lib/minikube/build/build.1942913591.tar
I0429 19:13:55.337572    7300 docker.go:360] Building image: /var/lib/minikube/build/build.1942913591
I0429 19:13:55.348678    7300 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-980800 /var/lib/minikube/build/build.1942913591
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0429 19:13:57.732426    7300 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-980800 /var/lib/minikube/build/build.1942913591: (2.3837303s)
I0429 19:13:57.746419    7300 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1942913591
I0429 19:13:57.787154    7300 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1942913591.tar
I0429 19:13:57.808703    7300 build_images.go:217] Built localhost/my-image:functional-980800 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.1942913591.tar
I0429 19:13:57.808703    7300 build_images.go:133] succeeded building to: functional-980800
I0429 19:13:57.808703    7300 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls: (7.3745169s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (27.42s)

TestFunctional/parallel/ImageCommands/Setup (5.12s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.852358s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-980800
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image load --daemon gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image load --daemon gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr: (16.0294549s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls: (8.0584655s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.09s)

TestFunctional/parallel/ProfileCmd/profile_not_create (11.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.9579s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.49s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-980800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9832: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_list (10.8s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.5247232s)
functional_test.go:1311: Took "10.5252949s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "272.1763ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.80s)

TestFunctional/parallel/ProfileCmd/profile_json_output (10.98s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.6756634s)
functional_test.go:1362: Took "10.6761863s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "299.6866ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.98s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image load --daemon gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image load --daemon gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr: (12.8043983s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls: (7.5853536s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.39s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-980800 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-980800 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-gbnht" [b9e6d8f2-d39e-4a4f-b1cb-d69610f845ad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-gbnht" [b9e6d8f2-d39e-4a4f-b1cb-d69610f845ad] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.0115102s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.46s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (28.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.9494523s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-980800
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image load --daemon gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image load --daemon gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr: (15.941452s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls: (8.5590923s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (28.71s)

TestFunctional/parallel/ServiceCmd/List (14.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 service list: (14.4844598s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 service list -o json: (14.4335807s)
functional_test.go:1490: Took "14.4335807s" to run "out/minikube-windows-amd64.exe -p functional-980800 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.43s)

TestFunctional/parallel/DockerEnv/powershell (48.54s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-980800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-980800"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-980800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-980800": (31.6569929s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-980800 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-980800 docker-env | Invoke-Expression ; docker images": (16.8603037s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (48.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image save gcr.io/google-containers/addon-resizer:functional-980800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image save gcr.io/google-containers/addon-resizer:functional-980800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.2908745s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.29s)

TestFunctional/parallel/ImageCommands/ImageRemove (17.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image rm gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image rm gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr: (8.9033399s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls: (8.9270708s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.83s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.197126s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image ls: (8.7501913s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.95s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.5s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 update-context --alsologtostderr -v=2: (2.5047278s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.50s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.64s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 update-context --alsologtostderr -v=2: (2.6348287s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.64s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.53s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 update-context --alsologtostderr -v=2: (2.5306133s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.53s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-980800
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-980800 image save --daemon gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-980800 image save --daemon gcr.io/google-containers/addon-resizer:functional-980800 --alsologtostderr: (11.3364808s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-980800
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.80s)

TestFunctional/delete_addon-resizer_images (0.5s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-980800
--- PASS: TestFunctional/delete_addon-resizer_images (0.50s)

TestFunctional/delete_my-image_image (0.18s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-980800
--- PASS: TestFunctional/delete_my-image_image (0.18s)

TestFunctional/delete_minikube_cached_images (0.19s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-980800
--- PASS: TestFunctional/delete_minikube_cached_images (0.19s)

TestMultiControlPlane/serial/StartCluster (724.16s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-513500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0429 19:20:23.959239   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:23.974411   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:23.989621   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:24.020624   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:24.067247   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:24.161294   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:24.335431   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:24.669745   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:25.322524   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:26.627562   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:29.193995   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:34.329384   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:20:44.580464   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:21:05.062251   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:21:46.030864   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:23:07.962085   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:23:10.197831   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 19:25:23.964062   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:25:51.812040   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:26:13.391643   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 19:28:10.188233   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 19:30:23.968920   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-513500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m27.0028442s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 status -v=7 --alsologtostderr: (37.1584326s)
--- PASS: TestMultiControlPlane/serial/StartCluster (724.16s)

TestMultiControlPlane/serial/DeployApp (13.39s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-513500 -- rollout status deployment/busybox: (3.835527s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7nt6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7nt6 -- nslookup kubernetes.io: (2.2870182s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7rdw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7rdw -- nslookup kubernetes.io: (1.6837191s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-txsvr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7nt6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7rdw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-txsvr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7nt6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-k7rdw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-513500 -- exec busybox-fc5497c4f-txsvr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.39s)

TestMultiControlPlane/serial/AddWorkerNode (257.75s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-513500 -v=7 --alsologtostderr
E0429 19:33:10.195747   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 19:35:23.964697   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-513500 -v=7 --alsologtostderr: (3m28.7555173s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 status -v=7 --alsologtostderr
E0429 19:36:47.193017   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 status -v=7 --alsologtostderr: (48.9916453s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (257.75s)

TestMultiControlPlane/serial/NodeLabels (0.2s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-513500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (28.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (28.9590025s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (28.96s)

TestMultiControlPlane/serial/CopyFile (642.88s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 status --output json -v=7 --alsologtostderr
E0429 19:38:10.192287   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 status --output json -v=7 --alsologtostderr: (49.3817796s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp testdata\cp-test.txt ha-513500:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp testdata\cp-test.txt ha-513500:/home/docker/cp-test.txt: (9.7078363s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt": (9.7661037s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500.txt: (9.7120619s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt": (9.709707s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500:/home/docker/cp-test.txt ha-513500-m02:/home/docker/cp-test_ha-513500_ha-513500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500:/home/docker/cp-test.txt ha-513500-m02:/home/docker/cp-test_ha-513500_ha-513500-m02.txt: (17.0656843s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt": (9.7122464s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test_ha-513500_ha-513500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test_ha-513500_ha-513500-m02.txt": (9.6381653s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500:/home/docker/cp-test.txt ha-513500-m03:/home/docker/cp-test_ha-513500_ha-513500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500:/home/docker/cp-test.txt ha-513500-m03:/home/docker/cp-test_ha-513500_ha-513500-m03.txt: (17.0332831s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt": (9.6542649s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test_ha-513500_ha-513500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test_ha-513500_ha-513500-m03.txt": (9.6598449s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500:/home/docker/cp-test.txt ha-513500-m04:/home/docker/cp-test_ha-513500_ha-513500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500:/home/docker/cp-test.txt ha-513500-m04:/home/docker/cp-test_ha-513500_ha-513500-m04.txt: (16.7702086s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt"
E0429 19:40:23.968465   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test.txt": (9.6971697s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test_ha-513500_ha-513500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test_ha-513500_ha-513500-m04.txt": (9.6239221s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp testdata\cp-test.txt ha-513500-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp testdata\cp-test.txt ha-513500-m02:/home/docker/cp-test.txt: (9.6653386s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt": (9.7445006s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500-m02.txt: (9.7138629s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt": (9.5742838s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m02:/home/docker/cp-test.txt ha-513500:/home/docker/cp-test_ha-513500-m02_ha-513500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m02:/home/docker/cp-test.txt ha-513500:/home/docker/cp-test_ha-513500-m02_ha-513500.txt: (16.9732759s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt": (10.0431454s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test_ha-513500-m02_ha-513500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test_ha-513500-m02_ha-513500.txt": (9.8111939s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m02:/home/docker/cp-test.txt ha-513500-m03:/home/docker/cp-test_ha-513500-m02_ha-513500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m02:/home/docker/cp-test.txt ha-513500-m03:/home/docker/cp-test_ha-513500-m02_ha-513500-m03.txt: (16.9225487s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt": (9.8102278s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test_ha-513500-m02_ha-513500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test_ha-513500-m02_ha-513500-m03.txt": (9.7959983s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m02:/home/docker/cp-test.txt ha-513500-m04:/home/docker/cp-test_ha-513500-m02_ha-513500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m02:/home/docker/cp-test.txt ha-513500-m04:/home/docker/cp-test_ha-513500-m02_ha-513500-m04.txt: (16.9530942s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt"
E0429 19:42:53.400160   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test.txt": (9.8409381s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test_ha-513500-m02_ha-513500-m04.txt"
E0429 19:43:10.194196   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test_ha-513500-m02_ha-513500-m04.txt": (9.928803s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp testdata\cp-test.txt ha-513500-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp testdata\cp-test.txt ha-513500-m03:/home/docker/cp-test.txt: (9.7992699s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt": (9.7448271s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500-m03.txt: (9.8448644s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt": (9.6639899s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt ha-513500:/home/docker/cp-test_ha-513500-m03_ha-513500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt ha-513500:/home/docker/cp-test_ha-513500-m03_ha-513500.txt: (17.0704336s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt": (9.8100475s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test_ha-513500-m03_ha-513500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test_ha-513500-m03_ha-513500.txt": (9.7677184s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt ha-513500-m02:/home/docker/cp-test_ha-513500-m03_ha-513500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt ha-513500-m02:/home/docker/cp-test_ha-513500-m03_ha-513500-m02.txt: (17.048542s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt": (9.8453628s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test_ha-513500-m03_ha-513500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test_ha-513500-m03_ha-513500-m02.txt": (9.7096303s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt ha-513500-m04:/home/docker/cp-test_ha-513500-m03_ha-513500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m03:/home/docker/cp-test.txt ha-513500-m04:/home/docker/cp-test_ha-513500-m03_ha-513500-m04.txt: (16.9471061s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt"
E0429 19:45:23.980282   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test.txt": (9.7013647s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test_ha-513500-m03_ha-513500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test_ha-513500-m03_ha-513500-m04.txt": (9.7480007s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp testdata\cp-test.txt ha-513500-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp testdata\cp-test.txt ha-513500-m04:/home/docker/cp-test.txt: (9.8232865s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt": (9.7054188s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile875035895\001\cp-test_ha-513500-m04.txt: (9.7449607s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt": (9.6465251s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt ha-513500:/home/docker/cp-test_ha-513500-m04_ha-513500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt ha-513500:/home/docker/cp-test_ha-513500-m04_ha-513500.txt: (16.9213484s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt": (9.7034568s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test_ha-513500-m04_ha-513500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500 "sudo cat /home/docker/cp-test_ha-513500-m04_ha-513500.txt": (9.7563405s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt ha-513500-m02:/home/docker/cp-test_ha-513500-m04_ha-513500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt ha-513500-m02:/home/docker/cp-test_ha-513500-m04_ha-513500-m02.txt: (16.9230857s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt": (9.7956036s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test_ha-513500-m04_ha-513500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m02 "sudo cat /home/docker/cp-test_ha-513500-m04_ha-513500-m02.txt": (9.7308947s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt ha-513500-m03:/home/docker/cp-test_ha-513500-m04_ha-513500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 cp ha-513500-m04:/home/docker/cp-test.txt ha-513500-m03:/home/docker/cp-test_ha-513500-m04_ha-513500-m03.txt: (16.9552083s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m04 "sudo cat /home/docker/cp-test.txt": (9.7856686s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test_ha-513500-m04_ha-513500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-513500 ssh -n ha-513500-m03 "sudo cat /home/docker/cp-test_ha-513500-m04_ha-513500-m03.txt": (9.7438596s)
--- PASS: TestMultiControlPlane/serial/CopyFile (642.88s)

TestImageBuild/serial/Setup (202.03s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-342000 --driver=hyperv
E0429 19:53:10.208503   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 19:53:27.203063   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 19:55:23.981210   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-342000 --driver=hyperv: (3m22.0276637s)
--- PASS: TestImageBuild/serial/Setup (202.03s)

TestImageBuild/serial/NormalBuild (9.9s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-342000
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-342000: (9.904292s)
--- PASS: TestImageBuild/serial/NormalBuild (9.90s)

TestImageBuild/serial/BuildWithBuildArg (9.39s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-342000
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-342000: (9.386268s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.39s)

TestImageBuild/serial/BuildWithDockerIgnore (7.9s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-342000
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-342000: (7.9047815s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.90s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.75s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-342000
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-342000: (7.7545997s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.75s)

TestJSONOutput/start/Command (244.11s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-757400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0429 19:58:10.211572   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 19:59:33.416800   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 20:00:23.983058   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-757400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m4.1071921s)
--- PASS: TestJSONOutput/start/Command (244.11s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (8.06s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-757400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-757400 --output=json --user=testUser: (8.0604265s)
--- PASS: TestJSONOutput/pause/Command (8.06s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (8.09s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-757400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-757400 --output=json --user=testUser: (8.0939625s)
--- PASS: TestJSONOutput/unpause/Command (8.09s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (40.13s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-757400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-757400 --output=json --user=testUser: (40.1305516s)
--- PASS: TestJSONOutput/stop/Command (40.13s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.59s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-203900 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-203900 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (315.561ms)

-- stdout --
	{"specversion":"1.0","id":"a3b67fa7-7cda-40d9-9fdb-7e953f11de29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-203900] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"599eb427-8998-4255-9d2c-4466b1e06e81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"79a612ba-4d3d-4231-babc-e19130cf6a25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"21f4ed87-a497-4fed-b7fa-e7e10208417a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"d7e88c0a-7b54-47bf-9d8b-f206bc4f38ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18774"}}
	{"specversion":"1.0","id":"0646368d-0866-4f0a-8bdc-15aaceb35375","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a314b091-ef5c-4bc6-8859-685d0f8a225c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0429 20:02:15.930402   13948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-203900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-203900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-203900: (1.2775334s)
--- PASS: TestErrorJSONOutput (1.59s)
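Each JSON line in the stdout above is a CloudEvents 1.0 envelope emitted by `minikube --output=json`. As an illustrative sketch only (not part of the test suite; the `classify` helper and the abbreviated `sample` envelope are invented here for demonstration), a few lines of Python show how such a line can be split into its event kind and message:

```python
import json

# Abbreviated envelope modeled on the DRV_UNSUPPORTED_OS error event above;
# real envelopes also carry "id", "source", and other CloudEvents attributes.
sample = ('{"specversion":"1.0",'
          '"type":"io.k8s.sigs.minikube.error",'
          '"datacontenttype":"application/json",'
          '"data":{"exitcode":"56",'
          '"message":"The driver \'fail\' is not supported on windows/amd64",'
          '"name":"DRV_UNSUPPORTED_OS"}}')

def classify(line: str):
    """Return (event kind, human-readable message) for one JSON-output line."""
    event = json.loads(line)
    # "io.k8s.sigs.minikube.error" -> "error" (likewise "step", "info", ...)
    kind = event["type"].rsplit(".", 1)[-1]
    return kind, event["data"].get("message", "")

kind, msg = classify(sample)
print(kind, "->", msg)
# → error -> The driver 'fail' is not supported on windows/amd64
```

This is the same shape the `TestJSONOutput` subtests above exercise: each line is parsed independently, and step events additionally carry `currentstep`/`totalsteps` fields that the `DistinctCurrentSteps`/`IncreasingCurrentSteps` checks inspect.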

TestMainNoArgs (0.27s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.27s)

TestMinikubeProfile (526.05s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-247400 --driver=hyperv
E0429 20:03:10.210217   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 20:05:23.987791   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-247400 --driver=hyperv: (3m21.1869201s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-247400 --driver=hyperv
E0429 20:08:10.215653   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-247400 --driver=hyperv: (3m22.4942997s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-247400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.355714s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-247400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.3514069s)
helpers_test.go:175: Cleaning up "second-247400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-247400
E0429 20:10:07.215122   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-247400: (41.5977258s)
helpers_test.go:175: Cleaning up "first-247400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-247400
E0429 20:10:23.981274   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-247400: (41.1071134s)
--- PASS: TestMinikubeProfile (526.05s)

TestMountStart/serial/StartWithMountFirst (159.16s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-089600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0429 20:13:10.218584   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-089600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m38.1465592s)
--- PASS: TestMountStart/serial/StartWithMountFirst (159.16s)

TestMountStart/serial/VerifyMountFirst (9.7s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-089600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-089600 ssh -- ls /minikube-host: (9.6981044s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.70s)

TestMountStart/serial/StartWithMountSecond (159.91s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-089600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0429 20:15:23.984828   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 20:16:13.437820   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-089600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m38.8986518s)
--- PASS: TestMountStart/serial/StartWithMountSecond (159.91s)

TestMountStart/serial/VerifyMountSecond (9.69s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-089600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-089600 ssh -- ls /minikube-host: (9.6917448s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.69s)

TestMountStart/serial/DeleteFirst (27.96s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-089600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-089600 --alsologtostderr -v=5: (27.9608689s)
--- PASS: TestMountStart/serial/DeleteFirst (27.96s)

TestMountStart/serial/VerifyMountPostDelete (9.66s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-089600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-089600 ssh -- ls /minikube-host: (9.6602716s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.66s)

TestMountStart/serial/Stop (27.12s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-089600
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-089600: (27.1219556s)
--- PASS: TestMountStart/serial/Stop (27.12s)

TestMultiNode/serial/MultiNodeLabels (0.19s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-515700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

TestMultiNode/serial/ProfileList (10.04s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.0413177s)
--- PASS: TestMultiNode/serial/ProfileList (10.04s)

TestPreload (533.41s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-385300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0429 20:58:10.227632   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 21:00:07.258098   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 21:00:24.010525   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-385300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m35.5958443s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-385300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-385300 image pull gcr.io/k8s-minikube/busybox: (8.5769249s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-385300
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-385300: (40.7301791s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-385300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0429 21:03:10.232753   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-385300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m37.9532953s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-385300 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-385300 image list: (7.5889147s)
helpers_test.go:175: Cleaning up "test-preload-385300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-385300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-385300: (42.9613073s)
--- PASS: TestPreload (533.41s)

TestScheduledStopWindows (339.36s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-424300 --memory=2048 --driver=hyperv
E0429 21:05:24.017227   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
E0429 21:06:13.486213   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 21:08:10.237554   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-424300 --memory=2048 --driver=hyperv: (3m24.8178982s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-424300 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-424300 --schedule 5m: (11.1400453s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-424300 -n scheduled-stop-424300
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-424300 -n scheduled-stop-424300: exit status 1 (10.0169121s)

** stderr ** 
	W0429 21:08:27.303513    4432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-424300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-424300 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.9073989s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-424300 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-424300 --schedule 5s: (10.9972795s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-424300
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-424300: exit status 7 (2.4987454s)

-- stdout --
	scheduled-stop-424300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0429 21:09:58.242697    6364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-424300 -n scheduled-stop-424300
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-424300 -n scheduled-stop-424300: exit status 7 (2.4561106s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0429 21:10:00.737208     756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-424300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-424300
E0429 21:10:24.019510   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-424300: (27.5118924s)
--- PASS: TestScheduledStopWindows (339.36s)

TestRunningBinaryUpgrade (1006.79s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1899693053.exe start -p running-upgrade-013100 --memory=2200 --vm-driver=hyperv
E0429 21:16:47.271375   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1899693053.exe start -p running-upgrade-013100 --memory=2200 --vm-driver=hyperv: (7m50.7271327s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-013100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0429 21:25:24.029439   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-013100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m50.1571699s)
helpers_test.go:175: Cleaning up "running-upgrade-013100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-013100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-013100: (1m5.1409511s)
--- PASS: TestRunningBinaryUpgrade (1006.79s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-262400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-262400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (466.9511ms)

-- stdout --
	* [NoKubernetes-262400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0429 21:10:30.739578   12364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

TestStoppedBinaryUpgrade/Setup (0.91s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

TestStoppedBinaryUpgrade/Upgrade (865.9s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.760131333.exe start -p stopped-upgrade-467400 --memory=2200 --vm-driver=hyperv
E0429 21:15:24.024661   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-980800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.760131333.exe start -p stopped-upgrade-467400 --memory=2200 --vm-driver=hyperv: (6m16.1592129s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.760131333.exe -p stopped-upgrade-467400 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.760131333.exe -p stopped-upgrade-467400 stop: (37.1209123s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-467400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0429 21:22:53.500813   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
E0429 21:23:10.252388   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-467400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m32.6171423s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (865.90s)

TestPause/serial/Start (509.77s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-416800 --memory=2048 --install-addons=false --wait=all --driver=hyperv
E0429 21:18:10.250788   13756 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-442400\client.crt: The system cannot find the path specified.
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-416800 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (8m29.7689758s)
--- PASS: TestPause/serial/Start (509.77s)

TestStoppedBinaryUpgrade/MinikubeLogs (10.18s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-467400
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-467400: (10.1805953s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.18s)

Test skip (30/198)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-980800 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-980800 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 12972: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/DryRun (5.04s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-980800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-980800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0404746s)

-- stdout --
	* [functional-980800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0429 19:12:50.962648    2840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 19:12:51.045256    2840 out.go:291] Setting OutFile to fd 1160 ...
	I0429 19:12:51.046115    2840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:12:51.046175    2840 out.go:304] Setting ErrFile to fd 1324...
	I0429 19:12:51.046175    2840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:12:51.070450    2840 out.go:298] Setting JSON to false
	I0429 19:12:51.074459    2840 start.go:129] hostinfo: {"hostname":"minikube6","uptime":19910,"bootTime":1714398060,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 19:12:51.074459    2840 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 19:12:51.079475    2840 out.go:177] * [functional-980800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 19:12:51.083482    2840 notify.go:220] Checking for updates...
	I0429 19:12:51.085462    2840 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:12:51.088471    2840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:12:51.090483    2840 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 19:12:51.093519    2840 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:12:51.095475    2840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:12:51.099471    2840 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:12:51.099471    2840 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-980800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-980800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.032984s)

-- stdout --
	* [functional-980800] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0429 19:12:38.650877    6256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 19:12:38.737520    6256 out.go:291] Setting OutFile to fd 752 ...
	I0429 19:12:38.737520    6256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:12:38.737520    6256 out.go:304] Setting ErrFile to fd 844...
	I0429 19:12:38.737520    6256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:12:38.763517    6256 out.go:298] Setting JSON to false
	I0429 19:12:38.767557    6256 start.go:129] hostinfo: {"hostname":"minikube6","uptime":19898,"bootTime":1714398060,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 19:12:38.767557    6256 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 19:12:38.776494    6256 out.go:177] * [functional-980800] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 19:12:38.779507    6256 notify.go:220] Checking for updates...
	I0429 19:12:38.782503    6256 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 19:12:38.784491    6256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:12:38.788498    6256 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 19:12:38.790490    6256 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:12:38.793574    6256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:12:38.796517    6256 config.go:182] Loaded profile config "functional-980800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 19:12:38.797494    6256 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)